From nift at maclisp.org Sun Oct 1 01:02:51 2017 From: nift at maclisp.org (Niels) Date: Sun, 01 Oct 2017 07:02:51 +0200 Subject: [Borgbackup] Backup fails with -> ValueError: time data '2017-09-11T23:54:21' does not match format '%Y-%m-%dT%H:%M:%S.%f' In-Reply-To: <87o9pt1xz2.fsf@mojo.lan> (Niels's message of "Fri, 29 Sep 2017 14:38:25 +0200") References: <87y3oz26ks.fsf@mojo.lan> <87o9pt1xz2.fsf@mojo.lan> Message-ID: <878tgv1mv8.fsf@mojo.lan> Niels writes: > Marian Beermann writes: > >> https://github.com/borgbackup/borg/issues/2994#issuecomment-331842939 > > Thanks for the pointer. I've removed the ".%f" in helpers.py as > described in the issue, on both the client and server. > > But unfortunately I get another error: > > Traceback (most recent call last): > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archiver.py", line 2168, in main > exit_code = archiver.run(args) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archiver.py", line 2104, in run > return set_ec(func(args)) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archiver.py", line 107, in wrapper > return method(self, args, repository=repository, **kwargs) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archiver.py", line 331, in do_create > create_inner(archive, cache) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archiver.py", line 304, in create_inner > read_special=args.read_special, dry_run=dry_run) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archiver.py", line 380, in _process > read_special=read_special, dry_run=dry_run) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archiver.py", line 380, in _process > read_special=read_special, dry_run=dry_run) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archiver.py", line 380, in _process > read_special=read_special, dry_run=dry_run) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archiver.py", line 380, in _process > read_special=read_special, dry_run=dry_run) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archiver.py", line 361, in _process > status = archive.process_file(path, st, cache, self.ignore_inode) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archive.py", line 722, in process_file > self.add_item(item) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archive.py", line 295, in add_item > self.write_checkpoint() > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archive.py", line 299, in write_checkpoint > self.save(self.checkpoint_name) > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/archive.py", line 333, in save > self.manifest.write() > File "/usr/local/src/borg-env/lib/python3.5/site-packages/borg/helpers.py", line 311, in write > prev_ts = datetime.strptime(self.timestamp, "%Y-%m-%dT%H:%M:%S") > File "/usr/lib/python3.5/_strptime.py", line 510, in _strptime_datetime > tt, fraction = _strptime(data_string, format) > File "/usr/lib/python3.5/_strptime.py", line 346, in _strptime > data_string[found.end():]) > ValueError: unconverted data remains: .061399 It looked like it needed the ".%f" again, so I added it again to helpers.py, and ran a backup. And it finished :) And it does indeed say to add it again in the github issue. But I just exepected it to finish the backup correctly without the ".%f", and thus was a little bit puzzled when this error came up. 
Niels From melkor.lord at gmail.com Sun Oct 1 21:02:34 2017 From: melkor.lord at gmail.com (Melkor Lord) Date: Mon, 2 Oct 2017 03:02:34 +0200 Subject: [Borgbackup] Test : Borg vs Restic In-Reply-To: References: <0a354364-4c99-611b-a3d1-cf0dfa7399c2@waldmann-edv.de> Message-ID: On 13/09/2017 06:24, Melkor Lord wrote: Sorry for digging up an old thread but I'd like to know if there's a chance to address some the concerns/suggestions here? Thanks. > > Features I *DISLIKE* in Borg : > > ============================== > > > > - Writes several files OUTSIDE the repository, ~/.config/borg and > > ~/.cache/borg and AFAIK, there's no option to use another paths for > > these files. > > There is, see environment vars. > > > I've read them :-) BORG_CACHE_DIR is OK but I see no way to relocate the > ".config/borg" directory in its *entirety*! > > If I read the docs correctly, I can only relocate *parts* of ".config" > with BORG_KEYS_DIR and BORG_SECURITY_DIR but there are other files > besides these on the ".config/borg" directory. > > I'd really like a BORG_CONFIG_DIR variable to relocate the *whole* > config dir. I want everything in ONE predictable place, especially stuff > related to backups which is a really critical process in my PoV. [...] > > Features I *DISLIKE* in BOTH tools : > > ==================================== > > > > - Their design geared at "backup-and-push-to-repository" which is nice > > but not desired in my environment. I need a > > "repository-pulls-backup-from-agent" design. > > Matter of taste / threat model. > > My network is heavily firewall-ed everywhere (using the amazing > Shorewall). It would be a pure PITA to modify the firewall rules for > every new added host. > > Having any host being able to contact the backup server is just a plain > and simple "does not compute" in my head :-) Backups are way to critical > to allow ANY host (in need of a backup) being able to contact the backup > server. There's no way anyone can make this scenario secure at all times > I dare anyone to prove me wrong. > > Besides security, it's also impossible to control for sure the strain on > the backup server if all/most of the hosts hammer the backup server at > the same time. > > OTOH, a backup server which contacts the hosts in need of backup is > perfect from a security standpoint. No host can contact the backup > server, ever! The backup server only fetches DATA from the hosts so it > can't be compromised by remote execution scenarios (when hosts can > contact it). As of the strain part, the backup server can accept > concurrent backups if its strong enough or do it sequentially if not. > > At all times, the backup server is safe *AND* in control. I fail to see > any scenario where you can guarantee the same when hosts push to the > backup server. -- Unix _IS_ user friendly, it's just selective about who its friends are. From tw at waldmann-edv.de Mon Oct 2 20:09:00 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 3 Oct 2017 02:09:00 +0200 Subject: [Borgbackup] Test : Borg vs Restic In-Reply-To: References: <0a354364-4c99-611b-a3d1-cf0dfa7399c2@waldmann-edv.de> Message-ID: > Sorry for digging up an old thread but I'd like to know if there's a > chance to address some the concerns/suggestions here? Thanks. >> I'd really like a BORG_CONFIG_DIR variable ... https://github.com/borgbackup/borg/issues/3083 >> "repository-pulls-backup-from-agent" design. IIRC, there is already a ticket about pull mode. 
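Until such a BORG_CONFIG_DIR variable exists, a partial workaround is to point the environment variables that borg 1.1 already provides at one predictable place. A minimal sketch, assuming a hypothetical /srv/borg-state directory; this relocates the cache, keys and security dirs, but not everything under ~/.config/borg:

  export BORG_STATE=/srv/borg-state               # hypothetical base dir, plain shell variable
  export BORG_CACHE_DIR="$BORG_STATE/cache"       # chunk/files cache (can grow to some GB)
  export BORG_KEYS_DIR="$BORG_STATE/keys"         # repository key files
  export BORG_SECURITY_DIR="$BORG_STATE/security" # security-related state
  borg create /path/to/repo::{hostname}-{now} /etc /home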
-- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Sat Oct 7 20:40:01 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 8 Oct 2017 02:40:01 +0200 Subject: [Borgbackup] borgbackup 1.1.0 released! Message-ID: <079d3583-0d43-9703-7a3c-73057d51cd44@waldmann-edv.de> We finally have a new stable release: 1.1 https://github.com/borgbackup/borg/releases/tag/1.1.0 details see url, this time including a link to a shortened changelog (the detailled changelog is rather long this time). 1.0 is now "oldstable", please be careful when reading the docs that you read docs for the version you use. cheers, thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From felix.schwarz at oss.schwarz.eu Sun Oct 8 04:16:14 2017 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Sun, 8 Oct 2017 10:16:14 +0200 Subject: [Borgbackup] borgbackup 1.1.0 released! In-Reply-To: <079d3583-0d43-9703-7a3c-73057d51cd44@waldmann-edv.de> References: <079d3583-0d43-9703-7a3c-73057d51cd44@waldmann-edv.de> Message-ID: <3c42beb6-5691-7583-f6f6-e4c24881f8e6@oss.schwarz.eu> Hi Thomas, Am 08.10.2017 um 02:40 schrieb Thomas Waldmann: > We finally have a new stable release: 1.1 > > https://github.com/borgbackup/borg/releases/tag/1.1.0 thank you (+ all contributors) very much for this nice release. I have some questions about compatibility between 1.0/1.1. Generally from the release notes it seems like I can continue to use an older version (1.0) on the server with a new client (1.1). Is there any danger to my stored data? (I have only one client per repository so I'm not concerned about accessing the same repo with 1.0+1.1 clients.) I skimmed over the "Major new features" list and it seems as if I might be able to enjoy nearly all of these features just with a new client. Is that correct? How about the other way round: Using an older borg client while having borg 1.1 on the server? (Without special precautions not all of our servers will be upgraded at the same time so there will be a period where server+client versions don't match.) Again thanks a lot, I'm really looking forward to the versions view, tar export and automatic compression! Felix From mario at emmenlauer.de Sun Oct 8 06:35:49 2017 From: mario at emmenlauer.de (Mario Emmenlauer) Date: Sun, 8 Oct 2017 12:35:49 +0200 Subject: [Borgbackup] borgbackup 1.1.0 released! In-Reply-To: <079d3583-0d43-9703-7a3c-73057d51cd44@waldmann-edv.de> References: <079d3583-0d43-9703-7a3c-73057d51cd44@waldmann-edv.de> Message-ID: <32c3a8b0-2476-af65-5870-5463c9aee0b0@emmenlauer.de> Dear Thomas and all, thanks a lot for the great work! Its highly appreciated! All the best, Mario On 08.10.2017 02:40, Thomas Waldmann wrote: > We finally have a new stable release: 1.1 > > https://github.com/borgbackup/borg/releases/tag/1.1.0 > > details see url, this time including a link to a shortened changelog > (the detailled changelog is rather long this time). > > 1.0 is now "oldstable", please be careful when reading the docs that you > read docs for the version you use. > > cheers, thomas > From tw at waldmann-edv.de Sun Oct 8 12:24:16 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 8 Oct 2017 18:24:16 +0200 Subject: [Borgbackup] borgbackup 1.1.0 released! 
In-Reply-To: <3c42beb6-5691-7583-f6f6-e4c24881f8e6@oss.schwarz.eu> References: <079d3583-0d43-9703-7a3c-73057d51cd44@waldmann-edv.de> <3c42beb6-5691-7583-f6f6-e4c24881f8e6@oss.schwarz.eu> Message-ID: > Generally from the release notes it seems like I can continue to use an older > version (1.0) on the server with a new client (1.1). Yes. You might not get all the fixes / new features that way, but it should work. > Is there any danger to my stored data? AFAIK: no > I skimmed over the "Major new features" list and it seems as if I might be > able to enjoy nearly all of these features just with a new client. Is that > correct? Yes, most stuff is done client-side. > How about the other way round: Using an older borg client while having borg > 1.1 on the server? (Without special precautions not all of our servers will be > upgraded at the same time so there will be a period where server+client > versions don't match.) AFAIK, that should also work. If your Linux (or whatever) dist does not yet have 1.1 packaged, you can always just use the linux (...) binary from the github releases page and just put it into /usr/local/bin/borg11 or so. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From felix.schwarz at oss.schwarz.eu Sun Oct 8 15:41:02 2017 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Sun, 8 Oct 2017 21:41:02 +0200 Subject: [Borgbackup] borgbackup 1.1.0 released! In-Reply-To: References: <079d3583-0d43-9703-7a3c-73057d51cd44@waldmann-edv.de> <3c42beb6-5691-7583-f6f6-e4c24881f8e6@oss.schwarz.eu> Message-ID: Hi Thomas, thank you very much for your quick response :-) Looks like I can start upgrading pretty soon. Am 08.10.2017 um 18:24 schrieb Thomas Waldmann: > If your Linux (or whatever) dist does not yet have 1.1 packaged, you can > always just use the linux (...) binary from the github releases page and > just put it into /usr/local/bin/borg11 or so. Ah, good point. However the issue for us is more that not all servers update at the same time (by default) once we push an update to our internal repos. So it is not so much a matter of "not packaged" but more of upgrade operations. Felix From bayerse at gmail.com Tue Oct 10 16:03:45 2017 From: bayerse at gmail.com (Sebastian Bayer) Date: Tue, 10 Oct 2017 22:03:45 +0200 Subject: [Borgbackup] Are regular check necessary? Message-ID: Hello, just recently I started evaluating borg as possible replacement for my rsync + ZFS snapshots backup strategy. So far, everything looks very good and I like borg a lot. I have two questions: Q1: Are regular check necessary / recommended? Are there any best practices? Q2: What can lead to corruption of the archive? If this has been covered before, I'm sorry, but I could not find anything. FYI, my borg backend is FreeNAS (which is based on FreeBSD) and a mirrored ZFS pool, so there is (almost) no risk of bit rotting. Thanks Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Tue Oct 10 16:34:06 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 10 Oct 2017 22:34:06 +0200 Subject: [Borgbackup] Are regular check necessary? In-Reply-To: References: Message-ID: <1f7981f8-427d-5e40-01d7-e2b9f4a49d3e@waldmann-edv.de> > just recently I started evaluating borg as possible replacement for my > rsync + ZFS snapshots backup strategy. > So far, everything looks very good and I like borg a lot. Glad you like it. \o/ > Q1: Are regular check necessary / recommended? Are there any best practices? 
Yes, do them once in a while. They can take quite some time, depending on your repo size and what exactly you choose to check and how fast your hw / network is. How often also depends on how reliable you think your hw is and how important your data is. Guess everything between once a week and once a year can make sense depending on that. > Q2: What can lead to corruption of the archive? Often it is some hardware issue: - hdd / ssd malfunctioning, media bitflips - memory malfunctioning (usually non-ECC memory) - other hw malfunctioning HW sometimes has some CRC or ECC checks in place, but they are relatively weak, so even with them there can be undetected corruption. Some hw does not have checks at all, e.g. non-ECC memory.(*) :( Sometimes there can be also other issues, like filesystem issues: - caused by fs driver errors - power failures - other system crashes Theoretically, also borg software bugs could corrupt a repo, but we did not have such bugs in recent versions. Borg tries to avoid that by all means by using a log-like data storage plus transactions. So even if a backup breaks down, we can just roll back the incomplete transaction to the previous (valid and consistent) repo state. > FYI, my borg backend is FreeNAS (which is based on FreeBSD) and a > mirrored ZFS pool, so there is (almost) no risk of bit rotting. That's a quite nice setup. If one disk has a undetected bitflip (undetected by the hardware), zfs can detect that and choose to use the valid data from the other disk. If you want more safety: - use ECC memory / other good/reliable hardware - do a 2nd borg backup to some other hardware / location You should avoid to lose data from a borg repo. As it is deduplicated, there is no redundancy. (*) https://github.com/borgbackup/borg/issues/2281 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From devzero at web.de Tue Oct 10 17:46:29 2017 From: devzero at web.de (devzero at web.de) Date: Tue, 10 Oct 2017 23:46:29 +0200 Subject: [Borgbackup] Are regular check necessary? In-Reply-To: References: Message-ID: hi, may i ask whats the reason why you want to replace zfs with borg? we are doing backup with rsync(via staging area)+borg and rsync(inplace)+zfs(rotating snapshots), i.e. despite of having a VM/block level backup we have our file level backups stored on two different archiving technologies to be on the safe side... regards roland > Gesendet: Dienstag, 10. Oktober 2017 um 22:03 Uhr > Von: "Sebastian Bayer" > An: borgbackup at python.org > Betreff: [Borgbackup] Are regular check necessary? > > Hello, > > just recently I started evaluating borg as possible replacement for my > rsync + ZFS snapshots backup strategy. > So far, everything looks very good and I like borg a lot. > > I have two questions: > > Q1: Are regular check necessary / recommended? Are there any best practices? > Q2: What can lead to corruption of the archive? > > If this has been covered before, I'm sorry, but I could not find anything. > > FYI, my borg backend is FreeNAS (which is based on FreeBSD) and a mirrored > ZFS pool, so there is (almost) no risk of bit rotting. > > Thanks > Sebastian > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From bayerse at gmail.com Wed Oct 11 03:12:57 2017 From: bayerse at gmail.com (Sebastian Bayer) Date: Wed, 11 Oct 2017 09:12:57 +0200 Subject: [Borgbackup] Are regular check necessary? In-Reply-To: References: Message-ID: Hey! 
@ Thomas Thank you for your detailed and interesting answer! I will schedule a check every other week then. Is my understanding of borg correct that I can also initiate the check from the server side? @ Roland I currently face three limitations that are both solved by borg: * move detection for large files and folders (e.g. I plan to re-organize my image folder, but was hesitant as rsync would copy everything again) * easier offsite copies of the repo: with ZFS I used a LUKS / geli encrypted drive + ZFS send for that. Now I can just use rsync the repo to some external drive and take that to the office / a bank safe / ... without worrying about the encryption. * easier offsite backups on other servers However, I agree that using a second backup strategy would be wise -- not putting all eggs in one basket. So maybe I will occasionally use rsync or dd. Regards Sebastian 2017-10-10 23:46 GMT+02:00 : > hi, > > may i ask whats the reason why you want to replace zfs with borg? > > we are doing backup with rsync(via staging area)+borg and > rsync(inplace)+zfs(rotating snapshots), i.e. despite of having a VM/block > level backup we have our file level backups stored on two different > archiving technologies to be on the safe side... > > regards > roland > > > Gesendet: Dienstag, 10. Oktober 2017 um 22:03 Uhr > > Von: "Sebastian Bayer" > > An: borgbackup at python.org > > Betreff: [Borgbackup] Are regular check necessary? > > > > Hello, > > > > just recently I started evaluating borg as possible replacement for my > > rsync + ZFS snapshots backup strategy. > > So far, everything looks very good and I like borg a lot. > > > > I have two questions: > > > > Q1: Are regular check necessary / recommended? Are there any best > practices? > > Q2: What can lead to corruption of the archive? > > > > If this has been covered before, I'm sorry, but I could not find > anything. > > > > FYI, my borg backend is FreeNAS (which is based on FreeBSD) and a > mirrored > > ZFS pool, so there is (almost) no risk of bit rotting. > > > > Thanks > > Sebastian > > _______________________________________________ > > Borgbackup mailing list > > Borgbackup at python.org > > https://mail.python.org/mailman/listinfo/borgbackup > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jost+lists at dimejo.at Wed Oct 11 03:42:10 2017 From: jost+lists at dimejo.at (Alex JOST) Date: Wed, 11 Oct 2017 09:42:10 +0200 Subject: [Borgbackup] Are regular check necessary? In-Reply-To: References: Message-ID: Am 11.10.2017 um 09:12 schrieb Sebastian Bayer: > Hey! > > @ Thomas > Thank you for your detailed and interesting answer! I will schedule a check > every other week then. > Is my understanding of borg correct that I can also initiate the check from > the server side? We do checks and prunes on the server which has so far worked without any issues. If you plan to do this remember that Borg needs a cache for its operations. Depending on your data this might add some GB, but generally shouldn't be a problem. Additionally, if your data is encrypted the server will need access to the encryption key. Depending on your scenario this might not be desirable. -- Alex JOST From bayerse at gmail.com Wed Oct 11 06:27:49 2017 From: bayerse at gmail.com (Sebastian Bayer) Date: Wed, 11 Oct 2017 12:27:49 +0200 Subject: [Borgbackup] Are regular check necessary? In-Reply-To: References: Message-ID: Perfect, thank you! 2017-10-11 9:42 GMT+02:00 Alex JOST : > Am 11.10.2017 um 09:12 schrieb Sebastian Bayer: > >> Hey! 
>> >> @ Thomas >> Thank you for your detailed and interesting answer! I will schedule a >> check >> every other week then. >> Is my understanding of borg correct that I can also initiate the check >> from >> the server side? >> > > We do checks and prunes on the server which has so far worked without any > issues. If you plan to do this remember that Borg needs a cache for its > operations. Depending on your data this might add some GB, but generally > shouldn't be a problem. Additionally, if your data is encrypted the server > will need access to the encryption key. Depending on your scenario this > might not be desirable. > > -- > Alex JOST > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcio at verdesaine.net Mon Oct 16 12:14:19 2017 From: marcio at verdesaine.net (=?UTF-8?Q?M=C3=A1rcio_Moreira?=) Date: Mon, 16 Oct 2017 14:14:19 -0200 Subject: [Borgbackup] Backup of a synchronized folder Message-ID: Hello friends, All our local machines plus a remote (backup) server will have a folder synchronized with Syncthing (or similar). Borg will be installed on the remote server to make backups of its synchronized folder. What do you think about this setup? May I have any kind of problem? I've heard that Syncthing synchronizes first to a temporary folder and there will be no problem with backups. But, anyway, I would like to hear from you who really know BorgBackup. Thanks, Marcio From tw at waldmann-edv.de Mon Oct 16 18:20:51 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 17 Oct 2017 00:20:51 +0200 Subject: [Borgbackup] Backup of a synchronized folder In-Reply-To: References: Message-ID: Hi Márcio, > All our local machines plus a remote (backup) server will have a > folder synchronized with Syncthing (or similar). Keep in mind that synchronization != backup. Some sync tools might keep some history, but better check if that is good enough for you. > Borg will be installed on the remote server to make backups of its > synchronized folder. > > What do you think about this setup? > > May I have any kind of problem? There is nothing special with that, borg will just back up the files (one after the other) it finds in the source directories you give to it. Keep in mind that if you back up a rather active filesystem, the backup might capture changing, inconsistent data, so you maybe want to at least make an (LVM or filesystem) snapshot and then back up the snapshot. That will at least give you crash-like consistency. If you need even better consistency, you need to make sure that all applications / services have brought their on-disk state into consistency. > I've heard that Syncthing synchronizes first to a temporary folder and > there will be no problem with backups. But, anyway, I would like to hear > from you who really know BorgBackup borg will just back up file-by-file whatever it finds then. You have to make sure that the state of each file is an actually useful / consistent state. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From ldl08 at gmx.net Tue Oct 17 18:07:49 2017 From: ldl08 at gmx.net (ldl08 at gmx.net) Date: Wed, 18 Oct 2017 00:07:49 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) Message-ID: An HTML attachment was scrubbed...
URL: From tw at waldmann-edv.de Tue Oct 17 19:49:41 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 18 Oct 2017 01:49:41 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: Message-ID: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> > My backup is encrypted, as it should be. I am unable, however, to mount > the repository to an unencrypted state. I need the full borg command you use (without the passphrase in case it should be in the command). And also a small example of what's visible below the mountpoint then and how you see it is still encrypted. Can you use borg extract to extract decrypted data from the repo? Any error messages? -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From baz at irc.msk.ru Wed Oct 18 06:54:27 2017 From: baz at irc.msk.ru (Alexey Bazhin) Date: Wed, 18 Oct 2017 13:54:27 +0300 Subject: [Borgbackup] borg recreate Message-ID: <20171018135427.9e5508e1b4ea5864685db733@irc.msk.ru> Hi! Is running borg recreate on repository absolutely the same as running borg recreate on all archives in repository separately? -- Alexey Bazhin From ldl08 at gmx.net Wed Oct 18 17:56:08 2017 From: ldl08 at gmx.net (ldl08 at gmx.net) Date: Wed, 18 Oct 2017 23:56:08 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> Message-ID: An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Wed Oct 18 21:13:28 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 19 Oct 2017 03:13:28 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> Message-ID: <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> > NOTE: after the borg mount command the folder 'home' is created which I > was unable to access via bash. Don't use "sudo" (just "borg mount ..."). Or if you do, use the fuse mount option that gives other users access, see "man fuse". > However, with a file manager, started as > root, I can see that there is a folder within 'home' with the name > '.ecryptfs' -- which, in turn, contains further folders and files > (mostly encrpyted). These are encrypted files from ecrypts ("encrypted home directories" in Ubuntu). If you backed them up that way, this is expected (and completely unrelated to borg's encryption). ? > david at lubuntu:~$ sudo borg extract --dry-run > /media/veracrypt1/home_repository/backup::lubuntu-2017-10-17-2251 > ~/borg_extract_folder/ That's not the way it works. The param right to the repo::archive is NOT the path to extract to, but a pattern to match the files you want to extract. And this is why you get this: > Include pattern '/home/david/borg_extract_folder/' never matched. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From ldl08 at gmx.net Thu Oct 19 12:33:52 2017 From: ldl08 at gmx.net (ldl08 at gmx.net) Date: Thu, 19 Oct 2017 18:33:52 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: An HTML attachment was scrubbed... 
URL: From tw at waldmann-edv.de Thu Oct 19 12:40:03 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 19 Oct 2017 18:40:03 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: > The reason I used 'sudo' is that my script, which creates my backups, is > executed as root (via 'sudo'). > I will see if I can mount without root rights. borg needs permissions to access the repo files. so, always run the borg that accesses the repo files as the same user. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From ldl08 at gmx.net Thu Oct 19 12:44:05 2017 From: ldl08 at gmx.net (ldl08 at gmx.net) Date: Thu, 19 Oct 2017 18:44:05 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Thu Oct 19 14:15:08 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 19 Oct 2017 20:15:08 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: On 10/19/2017 06:44 PM, ldl08 at gmx.net wrote: > Given your earlier hint ("don't use sudo for the mount operation") this using root for borg mount and then trying to access that with current (non-root) user does not work due to fuse. you need the "allow other" option due to that, see "man fuse". > seems to suggest that borg backups should not be done as root. Is > avoiding root rights therefore 'best practice'? best practice (in general, not just with borg) is not to use root when you don't need it. for backups you will need root, if you want to back up files not readable by the current user. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Thu Oct 19 21:47:46 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 20 Oct 2017 03:47:46 +0200 Subject: [Borgbackup] borg 1.1 performance (and also 1.0) Message-ID: <1eeca20a-54df-91ab-644b-5e4ab922b48a@waldmann-edv.de> Had some time and played around, looking for maximum borg performance I could see on my systems, also comparing 1.1 to 1.0. I used a standard ubuntu iso, so you can do the same measurement. System used for this: Xeon E5-2667 v2 (oldie, but goldie) 64GB RAM (mostly unused for this) Samsung 960 Pro SSD (PCIe x4, NVME) Ubuntu Linux 16.04 64bit ext4 filesystem ============================================================================== borg 1.1.1.dev36+g40186a3 (close to what will be in 1.1.1 release soon) $ BORG_PASSPHRASE=secret borg init -e authenticated-blake2 repo $ sudo dropcache $ BORG_PASSPHRASE=secret borg create --stats repo::ubuntu-iso ubuntu-16.04.3-desktop-amd64.iso ------------------------------------------------------------------------------ Archive name: ubuntu-iso Archive fingerprint: 6c5c62ee1fba8ba51a66c5be12de0eeeaa04935d2a926cb5daa4a112a3f61f53 Time (start): Fri, 2017-10-20 02:30:15 Time (end): Fri, 2017-10-20 02:30:24 Duration: 8.06 seconds Number of files: 1 Utilization of max. 
archive size: 0% ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 1.59 GB 1.58 GB 1.58 GB All archives: 1.59 GB 1.58 GB 1.58 GB Unique chunks Total chunks Chunk index: 598 598 ------------------------------------------------------------------------------ 197 MB/s borg create throughput. Yay! If you get a faster speed here, let us know. $ sudo dropcache $ BORG_PASSPHRASE=secret borg create --stats --files-cache=disabled repo::ubuntu-iso2 ubuntu-16.04.3-desktop-amd64.iso ------------------------------------------------------------------------------ Archive name: ubuntu-iso2 Archive fingerprint: b5929010bbef035e3f5a97cdb077f7ee720f93d0bb741e0bb24e383d7c0e59f4 Time (start): Fri, 2017-10-20 02:42:12 Time (end): Fri, 2017-10-20 02:42:17 Duration: 5.07 seconds Number of files: 1 Utilization of max. archive size: 0% ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 1.59 GB 1.58 GB 473 B All archives: 3.18 GB 3.17 GB 1.58 GB Unique chunks Total chunks Chunk index: 599 1196 ------------------------------------------------------------------------------ This was just testing how fast it gets when it has to read, chunk, hash, but no data is written to the repo: 313 MB/s ============================================================================== borg 1.0.7: $ ./borg-1.0.7 init -e none repo10 $ sudo dropcache $ ./borg-1.0.7 create -v --stats repo10::ubuntu-iso ubuntu-16.04.3-desktop-amd64.iso ------------------------------------------------------------------------------ Archive name: ubuntu-iso Archive fingerprint: 0714f2c23c0471b3b82344afbb8a99c89e41275b24f715cb8c380d7e67df1349 Time (start): Fri, 2017-10-20 02:57:51 Time (end): Fri, 2017-10-20 02:58:05 Duration: 14.11 seconds Number of files: 1 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 1.59 GB 1.59 GB 1.59 GB All archives: 1.59 GB 1.59 GB 1.59 GB Unique chunks Total chunks Chunk index: 592 592 ------------------------------------------------------------------------------ Although this repo is not authenticated, borg 1.0 is slower: 113 MB/s $ sudo dropcache $ ./borg-1.0.7 create -v --stats --no-files-cache repo10::ubuntu-iso2 ubuntu-16.04.3-desktop-amd64.iso ------------------------------------------------------------------------------ Archive name: ubuntu-iso2 Archive fingerprint: f1f72b7ef1f4941ad447ef1e0f62a6faaa70b7e52c280648ef17e77a924edf92 Time (start): Fri, 2017-10-20 02:58:25 Time (end): Fri, 2017-10-20 02:58:33 Duration: 8.02 seconds Number of files: 1 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 1.59 GB 1.59 GB 277 B All archives: 3.18 GB 3.18 GB 1.59 GB Unique chunks Total chunks Chunk index: 593 1184 ------------------------------------------------------------------------------ Reading / chunking / hashing also slower, about 200 MB/s. ============================================================================== dropcache script: # free pagecache, dentries and inodes sync echo 3 > /proc/sys/vm/drop_caches ============================================================================== Notes: - Don't be disappointed if your daily backup does not run that fast. - Small files will always be much slower due to access time and metadata processing overhead. 
- HDDs and also most SSDs are slower than the one used here. - The CPU used is some years old, but has 25MB Cache and can turbo to 4.0 GHz. - I did not use encryption here. With repokey-blake2 it is 144 MB/s. - The borg 1.0 repo was uncompressed, borg 1.1 tried lz4 compression (but as the input data was compressed already, that did not win much). -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From _ at thomaslevine.com Fri Oct 20 06:47:31 2017 From: _ at thomaslevine.com (Thomas Levine) Date: Fri, 20 Oct 2017 10:47:31 +0000 Subject: [Borgbackup] Lots of files that change rarely and predictably In-Reply-To: <41e96e6b-bd08-aff8-5e6c-07f047d53524@waldmann-edv.de> References: <20170911102459.00CD27E850@mailuser.nyi.internal> <41e96e6b-bd08-aff8-5e6c-07f047d53524@waldmann-edv.de> Message-ID: <20171020104736.3616F7F91B@mailuser.nyi.internal> Indeed, this is the annoying thing about MH format, but everything else about it is so nice. > I am not sure this is doable. You'ld still have to look into the > directory for new files. Borg's files cache lookup only needs to know > mtime, size and inode number to decide which files did not change. I think I was unclear. The recent mail folders and files are practically the only ones that ever change, so I want to tell borg to assume that a file has stayed the same if it is outside of the recent mail directory. I do not have to look at the mtime, size, nor inode number any old file because I already know that I did not change it. If I ever change the old emails, I will run the normal command. The recent mail folders presently contain about 6,765 emails total, and this is far less than the total 670,683 among all of the files. I arrived on an approach of making two types of archives, one with all 670,683 files, (Call this the "full" backup.) and another with just the 6,765. (Call this the "recent" backup.) I would usually run the recent backup, and I would run the full backup only when I changed files in other directories. When I restore backups, I first extract the newest full backup, and then I extract the newest recent backup on top of that. I compared these two backup styles in borg 1.0.11 with the following commands, run in succession. The first one is the recent backup, and the second is the full backup. $ time borg create --compression lzma,9 \ --exclude ,\* -v -e=repokey --exclude-caches \ /repository/mh::recent-2017-10-20-laxar.laxask \ context folders drafts inbox archive/2017-07/ a b c current sent # 1m06.12s real 0m33.13s user 0m30.90s system $ time borg create --compression lzma,9 \ --exclude ,\* -v -e=repokey --exclude-caches \ /repository/mh::full-2017-10-20-laxar.laxask \ # 3m39.66s real 2m30.03s user 1m04.77s system While the difference seems significant, it is not very large. In this comparison I used SSD as the storage medium. I think the difference could matter only on slow storage media. I am going to stick with just doing full backups, as that they're don't seem much slower and they gives me less to think about. If I find myself using slow storage media for these data, I'll compare them again. I was using MicroSDHC with ext2 filesystem as the storage medium when I originally inquired about this style of backup, and so I may have at the time had the slowest possible email storage stack. 
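To make the restore order described above concrete, a minimal sketch under the same assumptions (repository path and archive names are the examples from the commands above; borg extract writes into the current working directory):

  mkdir -p /tmp/mh-restore && cd /tmp/mh-restore
  # newest full backup first ...
  borg extract /repository/mh::full-2017-10-20-laxar.laxask
  # ... then the newest recent backup on top, so its newer files replace the full copies
  borg extract /repository/mh::recent-2017-10-20-laxar.laxask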
From ldl08 at gmx.net Fri Oct 20 16:22:52 2017 From: ldl08 at gmx.net (David Luebeck) Date: Fri, 20 Oct 2017 22:22:52 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: An HTML attachment was scrubbed... URL: From plattrap at lholding.net Fri Oct 20 17:19:01 2017 From: plattrap at lholding.net (Larry Holding) Date: Sat, 21 Oct 2017 10:19:01 +1300 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: It appears your data is inside an encrypted folder inside an encrypted backup. Can you provide the output of ?borg list /media/veracrypt1/home_repository/backup::lubuntu-2017-10-17-2251? which will show what flies were backed up? Sent from my toaster. > On 21/10/2017, at 09:22, David Luebeck wrote: > > Dear list, > > I was unable to make mount work with fuse -- and have given up (I have never worked with fuse before). > > So I tried the "borg extract" command, which, surprisingly, also failed -- I am at a loss. > > What I am trying to do is to extract a directory (home/david/Tools/Assertions) and its content to the current location. home/david/Tools/Assertions is a directory that should have been backed up by borg. > > Yet, I get a "Include pattern 'home/david/Tools/Assertions' never matched." comment. May I kindly ask what is going on? How can I verify what files are part of the backup (without using fuse)? Am I misunderstanding the documentation? > > Here is my bash history: > > root at lubuntu:~# cd borg_mp/ > root at lubuntu:~/borg_mp# borg extract /media/veracrypt1/home_repository/backup::lubuntu-2017-10-17-2251 home/david/Tools/Assertions > Enter passphrase for key /media/veracrypt1/home_repository/backup: > Include pattern 'home/david/Tools/Assertions' never matched. > root at lubuntu:~/borg_mp# > > Thanks for your continued help! > > David > > > Gesendet: Donnerstag, 19. Oktober 2017 um 20:15 Uhr > Von: "Thomas Waldmann" > An: borgbackup at python.org > Betreff: Re: [Borgbackup] impossible to mount encrypted repository (via fuse) > On 10/19/2017 06:44 PM, ldl08 at gmx.net wrote: > > Given your earlier hint ("don't use sudo for the mount operation") this > > using root for borg mount and then trying to access that with current > (non-root) user does not work due to fuse. you need the "allow other" > option due to that, see "man fuse". > > > seems to suggest that borg backups should not be done as root. Is > > avoiding root rights therefore 'best practice'? > > best practice (in general, not just with borg) is not to use root when > you don't need it. > > for backups you will need root, if you want to back up files not > readable by the current user. > > > > -- > > GPG ID: 9F88FB52FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ldl08 at gmx.net Fri Oct 20 18:11:41 2017 From: ldl08 at gmx.net (David Luebeck) Date: Sat, 21 Oct 2017 00:11:41 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Fri Oct 20 18:24:13 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 21 Oct 2017 00:24:13 +0200 Subject: [Borgbackup] Lots of files that change rarely and predictably In-Reply-To: <20171020104736.3616F7F91B@mailuser.nyi.internal> References: <20170911102459.00CD27E850@mailuser.nyi.internal> <41e96e6b-bd08-aff8-5e6c-07f047d53524@waldmann-edv.de> <20171020104736.3616F7F91B@mailuser.nyi.internal> Message-ID: <7925396a-8817-a777-7cb1-13df2e130b9e@waldmann-edv.de> > I think I was unclear. The recent mail folders and files are practically > the only ones that ever change, so I want to tell borg to assume that a > file has stayed the same if it is outside of the recent mail directory. I understood that, but it would be somehow weird (or at least "very special") for a backup tool to rely on that without even looking. You have to keep in mind that borg always does full backups, so the backup that works your way would include directory contents borg did not even look at. Besides that, there is a slight additional problem that borg always reads xattrs/acls from the fs as they are not contained in the files cache. > I arrived on an approach of making two types of archives, one with all > 670,683 files, (Call this the "full" backup.) and another with just the > 6,765. (Call this the "recent" backup.) I would usually run the recent > backup, and I would run the full backup only when I changed files in > other directories. When I restore backups, I first extract the newest > full backup, and then I extract the newest recent backup on top of that. Yeah, guess that would work. > $ time borg create --compression lzma,9 \ Don't use lzma levels > 6 with borg, it is just a waste of cpu cycles and won't improve compression due to borg's chunk size. > I was using MicroSDHC with ext2 filesystem as the storage medium when > I originally inquired about this style of backup, and so I may have at > the time had the slowest possible email storage stack. I've bad experiences with SD card reliability. Would never use them for backups. Also performance is usually less than great. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Fri Oct 20 18:28:47 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 21 Oct 2017 00:28:47 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: David, you can only get back from a borg backup archive what was put into it. It looks like you archived encrypted files (encrypted directory names, file names and file content). So if that is all you have and you did not also archive the "unencrypted view" onto these files (as offered after "opening" the ecryptfs with your encryption key / password), your only way to proceed is to extract that all and then open it with ecryptfs to get the decrypted view onto it. 
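A minimal sketch of that "extract, then open with ecryptfs" route, using the repository and archive names from this thread; ecryptfs-recover-private comes from the ecryptfs-utils package and its exact invocation may differ on your system:

  mkdir -p ~/borg_restore && cd ~/borg_restore
  # extract the (still ecryptfs-encrypted) backend files into the current directory
  borg extract /media/veracrypt1/home_repository/backup::lubuntu-2017-10-17-2251
  # then let ecryptfs mount a decrypted view of the recovered .Private directory;
  # it will ask for the login or mount passphrase
  sudo ecryptfs-recover-private home/.ecryptfs/david/.Private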
From _ at thomaslevine.com Fri Oct 20 18:41:07 2017 From: _ at thomaslevine.com (Thomas Levine) Date: Fri, 20 Oct 2017 22:41:07 +0000 Subject: [Borgbackup] Lots of files that change rarely and predictably In-Reply-To: <7925396a-8817-a777-7cb1-13df2e130b9e@waldmann-edv.de> References: <20170911102459.00CD27E850@mailuser.nyi.internal> <41e96e6b-bd08-aff8-5e6c-07f047d53524@waldmann-edv.de> <20171020104736.3616F7F91B@mailuser.nyi.internal> <7925396a-8817-a777-7cb1-13df2e130b9e@waldmann-edv.de> Message-ID: <20171020224115.ABD757FA7E@mailuser.nyi.internal> Thank you for the tip on lzma levels. Putting the cache and repository on the SD card would probably make things slow too. But if nothing in the checkout has changed, I think the most significant part would still be the scan of the files in my checkout, as the most of the filesystem access would happen there. Slow storage paradigms are sometimes worth it, as I mentioned with MH. SD card in a Raspberry Pi is the best approach I have come up with for a disposable computer. (I say Raspberry Pi because it is the easiest one to source in most countries; most of the competition is better if you don't need to be able to rebuild quickly after unexpectedly throwing your computer away.) From ldl08 at gmx.net Fri Oct 20 18:43:06 2017 From: ldl08 at gmx.net (ldl08 at gmx.net) Date: Sat, 21 Oct 2017 00:43:06 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: An HTML attachment was scrubbed... URL: From plattrap at lholding.net Fri Oct 20 22:01:50 2017 From: plattrap at lholding.net (Lawrence Holding) Date: Sat, 21 Oct 2017 15:01:50 +1300 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: It looks like you have backed up the encrypted image of your drive, this is why the extract cannot find the files you are looking for. When you run the "rsync" you are pointing at folders that have already been decrypted, but when you are running the "borg backup" you are backing up the encrypted image of the folders. So you will need to do a "borg mount" as your normal user on the root of your backup, to a folder you can write to. Then inside that you will need to mount the encrypted container you backed up using the same password you used for that, and then you can restore the files you wanted. The borg layer is encrypted with your borg keys/password, and the encryped image you packed up is encrypted with your filesystem / user keys. Enjoy, On 21 October 2017 at 11:11, David Luebeck wrote: > Hello Larry, > > the backup is in an encrypted external hard drive. And borgbackup also > encrypts, yes. > > Here is an example of the borg list output (after decrypting the external > hard drive, running as root): > > ECRYPTFS_FNEK_ENCRYPTED.FWbEL52sT4kolURbQni4U5yZ9gBIJR > bzRn3kGNrmwdiqnats88cR4I1liU--/ECRYPTFS_FNEK_ENCRYPTED. > FWbEL52sT4kolURbQni4U5yZ9gBIJRbzRn3kIqozIFMMDqo.ODmG6KIQw--- > /ECRYPTFS_FNEK_ENCRYPTED.FWbEL52sT4kolURbQni4U5yZ9gBIJR > bzRn3k1jBNWlZdVAkKJi3QnpN7S---/ECRYPTFS_FNEK_ENCRYPTED. > FWbEL52sT4kolURbQni4U5yZ9gBIJRbzRn3kMcN7e5LaqnaYPt.dON8RPE-- > -rw------- david david 16384 Fri, 2017-04-21 19:36:47 > home/.ecryptfs/david/.Private/ECRYPTFS_FNEK_ENCRYPTED. 
> FWbEL52sT4kolURbQni4U5yZ9gBIJRbzRn3k-2NX4IMEay4a0R7JZvpEwk-- > /ECRYPTFS_FNEK_ENCRYPTED.FWbEL52sT4kolURbQni4U5yZ9gBIJR > bzRn3kGNrmwdiqnats88cR4I1liU--/ECRYPTFS_FNEK_ENCRYPTED. > FWbEL52sT4kolURbQni4U5yZ9gBIJRbzRn3kIqozIFMMDqo.ODmG6KIQw--- > /ECRYPTFS_FNEK_ENCRYPTED.FWbEL52sT4kolURbQni4U5yZ9gBIJR > bzRn3k1jBNWlZdVAkKJi3QnpN7S---/ECRYPTFS_FNEK_ENCRYPTED. > FWbEL52sT4kolURbQni4U5yZ9gBIJRbzRn3kMcTUOnV0.r.0Z7ac---zwk-- > -rw------- david david 16384 Fri, 2017-04-21 19:36:48 > home/.ecryptfs/david/.Private/ECRYPTFS_FNEK_ENCRYPTED. > FWbEL52sT4kolURbQni4U5yZ9gBIJRbzRn3k-2NX4IMEay4a0R7JZvpEwk--/ECRYPTFS_FNEK_ENC^CKeyboard > interrupt. > > Thanks for your support! > > David > > > *Gesendet:* Freitag, 20. Oktober 2017 um 23:19 Uhr > *Von:* "Larry Holding" > *An:* "David Luebeck" > *Cc:* "Thomas Waldmann" , borgbackup at python.org > > *Betreff:* Re: [Borgbackup] impossible to mount encrypted repository (via > fuse) > It appears your data is inside an encrypted folder inside an encrypted > backup. > > Can you provide the output of ?borg list /media/veracrypt1/home_ > repository/backup::lubuntu-2017-10-17-2251? which will show what flies > were backed up? > > > Sent from my toaster. > > On 21/10/2017, at 09:22, David Luebeck wrote: > > > Dear list, > > I was unable to make mount work with fuse -- and have given up (I have > never worked with fuse before). > > So I tried the "borg extract" command, which, surprisingly, also failed -- > I am at a loss. > > What I am trying to do is to extract a directory > (home/david/Tools/Assertions) and its content to the current location. > home/david/Tools/Assertions is a directory that should have been backed up > by borg. > > Yet, I get a "Include pattern 'home/david/Tools/Assertions' never > matched." comment. May I kindly ask what is going on? How can I verify what > files are part of the backup (without using fuse)? Am I misunderstanding > the documentation? > > Here is my bash history: > > root at lubuntu:~# cd borg_mp/ > root at lubuntu:~/borg_mp# borg extract /media/veracrypt1/home_ > repository/backup::lubuntu-2017-10-17-2251 home/david/Tools/Assertions > Enter passphrase for key /media/veracrypt1/home_repository/backup: > Include pattern 'home/david/Tools/Assertions' never matched. > root at lubuntu:~/borg_mp# > > Thanks for your continued help! > > David > > > *Gesendet:* Donnerstag, 19. Oktober 2017 um 20:15 Uhr > *Von:* "Thomas Waldmann" > *An:* borgbackup at python.org > *Betreff:* Re: [Borgbackup] impossible to mount encrypted repository (via > fuse) > On 10/19/2017 06:44 PM, ldl08 at gmx.net wrote: > > Given your earlier hint ("don't use sudo for the mount operation") this > > using root for borg mount and then trying to access that with current > (non-root) user does not work due to fuse. you need the "allow other" > option due to that, see "man fuse". > > > seems to suggest that borg backups should not be done as root. Is > > avoiding root rights therefore 'best practice'? > > best practice (in general, not just with borg) is not to use root when > you don't need it. > > for backups you will need root, if you want to back up files not > readable by the current user. 
> > > > -- > > GPG ID: 9F88FB52FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ldl08 at gmx.net Sat Oct 21 12:39:12 2017 From: ldl08 at gmx.net (David Luebeck) Date: Sat, 21 Oct 2017 18:39:12 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: An HTML attachment was scrubbed... URL: From devzero at web.de Sat Oct 21 15:28:54 2017 From: devzero at web.de (devzero at web.de) Date: Sat, 21 Oct 2017 21:28:54 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: from a user perspective, borg does the same as rsync: it saves files to another location. but only those being told to. you would have the same problem with rsync, i guess. why should borg read below the fuse mount when rsync doesn't? regards roland > Gesendet: Samstag, 21. Oktober 2017 um 18:39 Uhr > Von: "David Luebeck" > An: "Thomas Waldmann" > Cc: borgbackup at python.org > Betreff: Re: [Borgbackup] impossible to mount encrypted repository (via fuse) > > > > Thanks Thomas and Larry for the guidance! > > I am trying to summarize the situation (and my understanding) in the hope that it might help others in the future: > > My laptop's OS (Lubuntu) is encrypted using the "full disc encryption" that is offered when installing the OS (alternate installer). > It seems that in this case the decryption of files and folders is done 'on the fly' by Lubuntu as required. In other words: most of the files/folders on my hdd remain encrypted until access to them is required, when they are being decrypted. > > This 'decryption on the fly' happens, for example, when a full hdd backup is run by rsync. > I now understand that when I use borg on my OS (Lubuntu), borg actually does not trigger the OS to decrypt on the fly (unlike rsync). As a consequence, data backed up by borg has been copied in its encrypted form. > > All this is fully unrelated to borg's own encryption mechanism: should I choose not to make use of borg's encryption capabilities, the backup would still be encrypted (the original Lubuntu encryption). > > So: > - using borg's encryption on a fully encrypted hdd results in a double-layered encryption -- which is certainly not what you want > - if you want to use borg to back up your fully encrypted hdd, you must either a) ensure that borg triggers the OS' "decryption on the fly" of the hdd, or b) make sure that the OS fully decrypts the entire hdd before you run the backup with borg. > > May I ask whether my understanding so far is correct, and if so, which of the two solutions (a. make borg trigger the OS to decrypt on the fly OR b. make the OS fully decrypt the hdd before you run borg) is the way to go. > > Thanks for your clarification, > > David > >
> It looks like you archived encrypted files (encrypted directory names, > file names and file content). > > So if that is all you have and you did not also archive the "unencrypted > view" onto these files (as offered after "opening" the ecryptfs with > your encryption key / password), your only way to proceed is to extract > that all and then open it with ecryptfs to get the decrypted view onto it. > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > > _______________________________________________ Borgbackup mailing list Borgbackup at python.org https://mail.python.org/mailman/listinfo/borgbackup From ldl08 at gmx.net Sat Oct 21 16:39:00 2017 From: ldl08 at gmx.net (David Luebeck) Date: Sat, 21 Oct 2017 22:39:00 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <926185af-9ba2-9118-5f69-46b87fb2fa36@waldmann-edv.de> <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Sat Oct 21 17:00:48 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 21 Oct 2017 23:00:48 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: References: <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> Message-ID: <60775347-44c2-d454-e68f-866f57fb437c@waldmann-edv.de> David, you have to get a deeper understanding about what encryption mechanisms you use on your system and how they work. Just look at the configs, read docs about what ecryptfs is and how it works, read about full disk encryption on linux (usually dm-crypt, LUKS). Your current assumption seems wrong as far as I can tell: - you are using ecryptfs, obviously ("home directory encryption" in ubuntu) - this is where the strange dir and file names come from. - you are either not using full disk encryption (dm-crypt), or - even if you do, it is not relevant for the current case, as with dm-crypt you never see such strange dir and file names: either you have the device "opened", in which case you have a decrypted device mapper device and everything looks normal/decrypted when using that, or it is closed and you see no filesystem at all. Using ecryptfs in addition to dm-crypt might not make sense for a lot of scenarios. Only do that if you positively know why you need that. If you start from wrong assumptions, you might get drawn further and further into strange, even bizarre assumptions about how your system (or borg) works. I'd guess it is way simpler than you currently think it is. While borg is far more advanced than rsync, in the end both are just reading files via the normal file reading functions. No magic. So, you likely just backed up ecryptfs' encrypted backend files and you maybe rather wanted to back up the (decrypted) files inside the mounted ecryptfs. BTW, I'd like to note that configuring disk / fs encryption is not on-topic on this list, so if you (after reading the docs) have more questions about that, maybe ask on a list or forum where it is on topic.
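To make that last suggestion concrete, a minimal sketch, assuming the ecryptfs home is currently mounted (i.e. the user is logged in) and using the repository path from this thread; the exclude paths are just examples of where the encrypted backend usually lives:

  # back up the decrypted view of the home directory, not the ecryptfs backend files
  borg create --stats \
      --exclude /home/.ecryptfs \
      --exclude /home/david/.Private \
      /media/veracrypt1/home_repository/backup::lubuntu-{now} \
      /home/david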
-- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From ldl08 at gmx.net Sat Oct 21 17:11:13 2017 From: ldl08 at gmx.net (David Luebeck) Date: Sat, 21 Oct 2017 23:11:13 +0200 Subject: [Borgbackup] impossible to mount encrypted repository (via fuse) In-Reply-To: <60775347-44c2-d454-e68f-866f57fb437c@waldmann-edv.de> References: <48daefc8-fd7f-7323-dc60-c0dd6fb0c4f2@waldmann-edv.de> <60775347-44c2-d454-e68f-866f57fb437c@waldmann-edv.de> Message-ID: An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Sun Oct 22 17:42:58 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 22 Oct 2017 23:42:58 +0200 Subject: [Borgbackup] borgbackup 1.1.1 bugfix release Message-ID: https://github.com/borgbackup/borg/releases/tag/1.1.1 details see url. cheers, thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From felix.schwarz at oss.schwarz.eu Tue Oct 24 10:58:19 2017 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Tue, 24 Oct 2017 16:58:19 +0200 Subject: [Borgbackup] ability to hide message about old borg server? Message-ID: Hi, I still use a borg 1.0 server with 1.1 clients. This seems to work pretty well so far but I get this message: Remote: Borg 1.0.9: exception in RPC call:out Remote: Traceback (most recent call last): Remote: File "/usr/lib/python3/dist-packages/borg/remote.py", line 108, in serve Remote: raise InvalidRPCMethod(method) Remote: borg.remote.InvalidRPCMethod: get_free_nonce Remote: Platform: ... Remote: Linux: ... ... Remote: sys.argv: ['/usr/bin/borg', 'serve', '--umask=077', '--info'] Remote: SSH_ORIGINAL_COMMAND: None Remote: Please upgrade to borg version 1.1+ on the server for safer AES-CTR nonce handling. Is there a way to suppress this message? Upgrading the server is on my list but it will take a bit until I can do it so it would be nice to have less output from backups (= no cron mails :-). Felix From gait at ATComputing.nl Wed Oct 25 07:26:32 2017 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 25 Oct 2017 13:26:32 +0200 Subject: [Borgbackup] How can I tell borg not to limit output to 80 chars/line? Message-ID: Hello, I'm using borg 1.1.1. on a FreeBSD-system with loglevel '--debug' running in the background with nohup (so it's not interactive). When running from a script in the background and using nohup, how can I tell borg not to limit its output to 80 chars/line? (I do appreciate that lines only contain a ^M, thats OK with me ;-) I tried prepending stdbuf, like this: # stdbuf -e L -o L borg ... to no avail. Anyone? Gerrit Example output: 0 B O 0 B C 0 B D 0 N data/backup/.zfs/snapshot/borg-2017-10-23T16:33:25 0 B O 0 B C 0 B D 1 N data/backup/.zfs/snapshot/...monthly.5/backup/.autorelabel Initializing cache transaction: Reading config Initializing cache transaction: Reading chunks Initializing cache transaction: Reading files 129.99 kB O 86.66 kB C 0 B D 1 N data/backup/.zfs/sna...onthly.5/backup/bin/cpio 1.18 MB O 772.24 kB C 0 B D 13 N data/backup/.zfs/sna...ly.5/backup/bin/hostname From plattrap at lholding.net Wed Oct 25 13:19:56 2017 From: plattrap at lholding.net (Lawrence Holding) Date: Thu, 26 Oct 2017 06:19:56 +1300 Subject: [Borgbackup] How can I tell borg not to limit output to 80 chars/line? In-Reply-To: References: Message-ID: Have you tried redirecting the output of borg? e.g "borg create ... | cat -" which may tell borg there is no shell to adapt to. On 26 October 2017 at 00:26, Gerrit A. Smit wrote: > Hello, > > > I'm using borg 1.1.1. 
on a FreeBSD-system with loglevel '--debug' > running in the background with nohup (so it's not interactive). > > When running from a script in the background and using nohup, > how can I tell borg not to limit its output to 80 chars/line? > (I do appreciate that lines only contain a ^M, thats OK with me ;-) > > I tried prepending stdbuf, like this: > > # stdbuf -e L -o L borg ... > > to no avail. > > Anyone? > > > Gerrit > > > Example output: > > 0 B O 0 B C 0 B D 0 N data/backup/.zfs/snapshot/borg-2017-10-23T16:33:25 > 0 B O 0 B C 0 B D 1 N data/backup/.zfs/snapshot/...m > onthly.5/backup/.autorelabel > Initializing cache transaction: Reading config > Initializing cache transaction: Reading chunks > Initializing cache transaction: Reading files > > 129.99 kB O 86.66 kB C 0 B D 1 N data/backup/.zfs/sna...onthly. > 5/backup/bin/cpio > 1.18 MB O 772.24 kB C 0 B D 13 N data/backup/.zfs/sna...ly.5/ba > ckup/bin/hostname > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Thu Oct 26 01:39:37 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 26 Oct 2017 07:39:37 +0200 Subject: [Borgbackup] ability to hide message about old borg server? In-Reply-To: References: Message-ID: <01c51927-1bba-925b-76dd-e207a31e0842@waldmann-edv.de> On 10/24/2017 04:58 PM, Felix Schwarz wrote: > I still use a borg 1.0 server with 1.1 clients. This seems to work pretty well > so far but I get this message: > > Remote: Borg 1.0.9: exception in RPC call:out > Remote: Traceback (most recent call last): > Remote: File "/usr/lib/python3/dist-packages/borg/remote.py", line 108, in serve > Remote: raise InvalidRPCMethod(method) > Remote: borg.remote.InvalidRPCMethod: get_free_nonce > Remote: Platform: ... > Remote: Linux: ... > ... > Remote: sys.argv: ['/usr/bin/borg', 'serve', '--umask=077', '--info'] > Remote: SSH_ORIGINAL_COMMAND: None > Remote: > Please upgrade to borg version 1.1+ on the server for safer AES-CTR nonce > handling. > > Is there a way to suppress this message? No, sorry. You maybe could write a little script that filters out exactly these messages and then pipe borg's output through it. > Upgrading the server is on my list > but it will take a bit until I can do it so it would be nice to have less > output from backups (= no cron mails :-). Or, check borg's rc and only send the mails if rc != 0. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From giuseppe.arvati at gmail.com Thu Oct 26 05:21:28 2017 From: giuseppe.arvati at gmail.com (Giuseppe Arvati) Date: Thu, 26 Oct 2017 11:21:28 +0200 Subject: [Borgbackup] file size info Message-ID: <0a14b2aa-d49d-b364-b7b4-d0c46d8f3420@gmail.com> Hello, I just create a new repo borg init -e repokey /opt/mnt/borgbackup/ads.borg and than create 1 archive with only 1 file borg create -p --stats /opt/mnt/borgbackup/ads.borg::ads-init ads_orcl_daily_dmp_4 the file is 109GB but at the and of create command I get this stats ------------------------------------------------------------------------------ Archive name: ads-init Time (start): Thu, 2017-10-26 10:06:31 Time (end): Thu, 2017-10-26 11:05:46 Duration: 59 minutes 14.92 seconds Number of files: 1 Utilization of max. 
archive size: 0% ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 234.55 GB 206.44 GB 103.22 GB All archives: 234.55 GB 206.44 GB 103.22 GB Unique chunks Total chunks Chunk index: 44022 88051 Why the original size reported is about twice the real size of the file ? thank you giuseppe From tw at waldmann-edv.de Thu Oct 26 05:46:18 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 26 Oct 2017 11:46:18 +0200 Subject: [Borgbackup] file size info In-Reply-To: <0a14b2aa-d49d-b364-b7b4-d0c46d8f3420@gmail.com> References: <0a14b2aa-d49d-b364-b7b4-d0c46d8f3420@gmail.com> Message-ID: <335ce43a-1fce-0819-1a01-83d06d7c38cc@waldmann-edv.de> > Why the original size reported is about twice the real size of the file ? That looks like a bug, can you file it on github? Or at least give the python version, borg version, OS? -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From giuseppe.arvati at gmail.com Thu Oct 26 06:05:21 2017 From: giuseppe.arvati at gmail.com (Giuseppe Arvati) Date: Thu, 26 Oct 2017 12:05:21 +0200 Subject: [Borgbackup] file size info In-Reply-To: <335ce43a-1fce-0819-1a01-83d06d7c38cc@waldmann-edv.de> References: <0a14b2aa-d49d-b364-b7b4-d0c46d8f3420@gmail.com> <335ce43a-1fce-0819-1a01-83d06d7c38cc@waldmann-edv.de> Message-ID: <592a4fa4-b8a2-6dde-ba27-eace1bb9df4c@gmail.com> Il 26/10/2017 11:46, Thomas Waldmann ha scritto: >> Why the original size reported is about twice the real size of the file ? > > That looks like a bug, can you file it on github? > > Or at least give the python version, borg version, OS? > sorry I forgot basic info [root at apamfs2 ~]# borg -V borg 1.1.1 CentOs 6 2.6.32-642.13.1.el6.x86_64 #1 SMP Wed Jan 11 20:56:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux Python 3.4.5 (default, Jun 1 2017, 13:52:39) [GCC 4.4.7 20120313 (Red Hat 4.4.7-18)] on linux I'll try another time with a smaller file just to test if the problem persists. If yes I'll fill a bug report on github giuseppe From dirk at deimeke.net Thu Oct 26 10:01:26 2017 From: dirk at deimeke.net (Dirk Deimeke) Date: Thu, 26 Oct 2017 16:01:26 +0200 Subject: [Borgbackup] Accidentally deleted some files from a repository Message-ID: Hi! I accdidentally deleted some files from a backup repository and found a segment to be corrupt. What can I do next? Delete the repositories using this segment? Something else? If the answer is to delete the repository it would be ok, but if possible I like to avoid that. I am using borg 1.0.11 on CentOS 7.4 with Python 2.7.5. Thanks in advance for any hint. Cheers Dirk -- https://d5e.org/ From dirk at deimeke.net Thu Oct 26 10:45:54 2017 From: dirk at deimeke.net (Dirk Deimeke) Date: Thu, 26 Oct 2017 16:45:54 +0200 Subject: [Borgbackup] Accidentally deleted some files from a repository In-Reply-To: References: Message-ID: <76f62a4579055060ad37fc4ac8a4a0e2@deimeke.net> On 2017-10-26 16:01, Dirk Deimeke wrote: Hi! > I am using borg 1.0.11 on CentOS 7.4 with Python 2.7.5. Correction: It is CPython 3.4.5 Cheers Dirk -- https://d5e.org/ From voldemort.misc at gmail.com Thu Oct 26 20:12:27 2017 From: voldemort.misc at gmail.com (Lord Voldemort) Date: Fri, 27 Oct 2017 07:12:27 +0700 Subject: [Borgbackup] Repo sync efficiency In-Reply-To: References: Message-ID: <44d0b383-87f3-8f75-8beb-b065fa1ae0dd@gmail.com> Hello, I've tried to sync my borg repo to cloud storage after each time I create a borg archive. 
But my observation is that even with a small change, on the destination the borg repo still changes a lot in terms of the number of files changed (under the data/ folder), not in total repo size, and as a result I need to transmit a lot of data. This is an example:

- Borg output:

                   Original size      Compressed size    Deduplicated size
This archive:            3.24 MB              1.89 MB             70.14 kB
All archives:           17.77 MB             13.31 MB              9.96 MB

                   Unique chunks         Total chunks
Chunk index:                1548                 2022

- 'rclone sync' (to sync my borg repo and google drive) output:

Transferred:   9.721 MBytes (564.116 kBytes/s)
Transferred:            6

Is there anything I can do about it?

From tw at waldmann-edv.de  Fri Oct 27 01:56:08 2017
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 27 Oct 2017 07:56:08 +0200
Subject: [Borgbackup] Repo sync efficiency
In-Reply-To: <44d0b383-87f3-8f75-8beb-b065fa1ae0dd@gmail.com>
References: <44d0b383-87f3-8f75-8beb-b065fa1ae0dd@gmail.com>
Message-ID: <693cc5c2-859a-2263-f97b-d45751818dc8@waldmann-edv.de>

> I've tried to sync my borg repo to cloud storage after each time I
> create a borg archive. But my observation is that even with a small
> change, on the destination the borg repo still changes a lot in terms
> of the number of files changed (under the data/ folder), not in total
> repo size, and as a result I need to transmit a lot of data. This is
> an example:
>
> - Borg output:
>
>                    Original size      Compressed size    Deduplicated size
> This archive:            3.24 MB              1.89 MB             70.14 kB
> All archives:           17.77 MB             13.31 MB              9.96 MB
>
>                    Unique chunks         Total chunks
> Chunk index:                1548                 2022
>
> - 'rclone sync' (to sync my borg repo and google drive) output:
>
> Transferred:   9.721 MBytes (564.116 kBytes/s)
> Transferred:            6
>
> Is there anything I can do about it?

This might be partly due to compact_segments, which is run automatically at the end of all major repo-writing activities. It will delete non-compact segment files and create new compact segment files.

Guess you'd see fewer changes with a repo in append-only mode, but OTOH that repo would always grow while being in that mode.

Also, borg 1.1 might do less compaction if it decides it is not worth the effort. OTOH, borg 1.1 has way bigger segment files by default if you init a repo with it.

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From plattrap at lholding.net  Fri Oct 27 02:01:31 2017
From: plattrap at lholding.net (Lawrence Holding)
Date: Fri, 27 Oct 2017 19:01:31 +1300
Subject: [Borgbackup] Accidentally deleted some files from a repository
In-Reply-To: <76f62a4579055060ad37fc4ac8a4a0e2@deimeke.net>
References: <76f62a4579055060ad37fc4ac8a4a0e2@deimeke.net>
Message-ID:

My thoughts.

1. First make a copy of the backup archive.
2. Run "borg check --repair ..." which should mark the corrupted segments as missing.
3. Run "borg create ..." which will replace any missing segments with the same data if a matching block is still on your disk.

On 27 October 2017 at 03:45, Dirk Deimeke wrote:

> On 2017-10-26 16:01, Dirk Deimeke wrote:
>
> Hi!
>
>> I am using borg 1.0.11 on CentOS 7.4 with Python 2.7.5.
>
> Correction: It is CPython 3.4.5
>
> Cheers
>
> Dirk
>
> --
> https://d5e.org/
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From dirk at deimeke.net Fri Oct 27 02:04:51 2017 From: dirk at deimeke.net (Dirk Deimeke) Date: Fri, 27 Oct 2017 08:04:51 +0200 Subject: [Borgbackup] Accidentally deleted some files from a repository In-Reply-To: References: <76f62a4579055060ad37fc4ac8a4a0e2@deimeke.net> Message-ID: <19bba1ad-a3be-b9fa-630b-677dab11241e@deimeke.net> On 27.10.2017 08:01, Lawrence Holding wrote: Hi Lawrence, > My thoughts. thank you. I was able to recover most of the data and will post a summary on the mailing list. Cheers Dirk -- https://d5e.org/ From dirk at deimeke.net Fri Oct 27 02:12:00 2017 From: dirk at deimeke.net (Dirk Deimeke) Date: Fri, 27 Oct 2017 08:12:00 +0200 Subject: [Borgbackup] Accidentally deleted some files from a repository In-Reply-To: References: Message-ID: On 26.10.2017 16:01, Dirk Deimeke wrote: Hi! Quick summary: I already did a "borg check --repair" which led to the following exception: --- $ borg check --repair /srv/borg/tigacorrupt 'check --repair' is an experimental feature that might result in data loss. Type 'YES' if you understand this and want to continue: YES Adding commit tag to segment 879209 Local Exception. Traceback (most recent call last): File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 2168, in main exit_code = archiver.run(args) File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 2104, in run return set_ec(func(args)) File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 107, in wrapper return method(self, args, repository=repository, **kwargs) File "/usr/lib64/python3.4/site-packages/borg/archiver.py", line 185, in do_check if not repository.check(repair=args.repair, save_space=args.save_space): File "/usr/lib64/python3.4/site-packages/borg/repository.py", line 476, in check self.io.write_commit() File "/usr/lib64/python3.4/site-packages/borg/repository.py", line 820, in write_commit fd = self.get_write_fd(no_new=True) File "/usr/lib64/python3.4/site-packages/borg/repository.py", line 680, in get_write_fd self._write_fd = open(self.segment_filename(self.segment), 'xb') FileNotFoundError: [Errno 2] No such file or directory: '/srv/borg/tigacorrupt/data/87/879210' Platform: Linux len.myown-it.com 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 Linux: CentOS Linux 7.4.1708 Core Borg: 1.0.11 Python: CPython 3.4.5 PID: 5622 CWD: /srv/borg sys.argv: ['/bin/borg', 'check', '--repair', '/srv/borg/tigacorrupt'] SSH_ORIGINAL_COMMAND: None --- Most important is the line "FileNotFoundError: [Errno 2] No such file or directory: '/srv/borg/tigacorrupt/data/87/879210'" As Marian told me by mail, I created the directory /srv/borg/tigacorrupt/data/87 and the next repair attempt ran through: --- borg check --repair /srv/borg/tigacorrupt 'check --repair' is an experimental feature that might result in data loss. Type 'YES' if you understand this and want to continue: YES Adding commit tag to segment 879209 Repository manifest not found! 219852 orphaned objects found! Archive consistency check complete, problems found. --- I started doing "new backups" now. Thanks all for your support. Cheers Dirk -- https://d5e.org/ From tw at waldmann-edv.de Fri Oct 27 05:31:36 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 27 Oct 2017 11:31:36 +0200 Subject: [Borgbackup] Accidentally deleted some files from a repository In-Reply-To: References: <76f62a4579055060ad37fc4ac8a4a0e2@deimeke.net> Message-ID: <378c3e36-484d-90c7-d951-b44336a9d41a@waldmann-edv.de> > 1. First make a copy of the backup archive. > 2. 
Run "borg check --repair ..." which should mark the corrupted > segments as missing. > 3. Run "borg create ..." which will replace any missing segments with > the same data if a matching block is still on your disk. 4. run borg check --repair again This will heal the chunks lists in a corrupted archive, if the missing chunk is present now again. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From dirk at deimeke.net Fri Oct 27 06:27:58 2017 From: dirk at deimeke.net (Dirk Deimeke) Date: Fri, 27 Oct 2017 12:27:58 +0200 Subject: [Borgbackup] Accidentally deleted some files from a repository In-Reply-To: <378c3e36-484d-90c7-d951-b44336a9d41a@waldmann-edv.de> References: <76f62a4579055060ad37fc4ac8a4a0e2@deimeke.net> <378c3e36-484d-90c7-d951-b44336a9d41a@waldmann-edv.de> Message-ID: On 2017-10-27 11:31, Thomas Waldmann wrote: Hi Thomas, > This will heal the chunks lists in a corrupted archive, if the missing > chunk is present now again. thank you. It did not recover anything. Don't mind! I started over with a fresh repository. Cheers Dirk -- https://d5e.org/ From dastapov at gmail.com Mon Oct 30 10:57:37 2017 From: dastapov at gmail.com (Dmitry Astapov) Date: Mon, 30 Oct 2017 14:57:37 +0000 Subject: [Borgbackup] Repo sync efficiency In-Reply-To: <693cc5c2-859a-2263-f97b-d45751818dc8@waldmann-edv.de> References: <44d0b383-87f3-8f75-8beb-b065fa1ae0dd@gmail.com> <693cc5c2-859a-2263-f97b-d45751818dc8@waldmann-edv.de> Message-ID: Hi, Can I clarify something? append-only more repo would still delete unused segment files once the last archive that references then is pruned, would it not? On Fri, Oct 27, 2017 at 6:56 AM, Thomas Waldmann wrote: > > I've tried to sync my borg repo to cloud storage after each time I > > create a borg archive. But my observation is that even with a small > > change on destination borg repo still changes alot in term of number of > > files changed (under data/ folder) not total size of repo and in result > > I need a large transmitted data. This is a example: > > > > - Borg output: > > > > Original size Compressed size Deduplicated size > > This archive: 3.24 MB 1.89 MB 70.14 kB > > All archives: 17.77 MB 13.31 MB 9.96 MB > > > > Unique chunks Total chunks > > Chunk index: 1548 2022 > > > > > > - 'rclone sync' (to sync my borg repo and google drive) output: > > > > Transferred: 9.721 MBytes (564.116 kBytes/s) > > Transferred: 6 > > > > Is there anything I can do about it? > > This might be partly due to compact_segments, which is run automatically > at the end of all major repo-writing activities. It will delete > non-compact segment files and create new compact segment files. > > Guess you'ld see less changes with a repo in append-only mode, but OTOH > that repo would always grow while being in that mode. > > Also, borg 1.1 might do less compaction if it decides it is not worth > the effort. OTOH, borg 1.1 has way bigger segment files by default if > you init a repo with it. > > -- > > GPG ID: 9F88FB52FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- D. Astapov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tw at waldmann-edv.de Mon Oct 30 20:03:03 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 31 Oct 2017 01:03:03 +0100 Subject: [Borgbackup] Repo sync efficiency In-Reply-To: References: <44d0b383-87f3-8f75-8beb-b065fa1ae0dd@gmail.com> <693cc5c2-859a-2263-f97b-d45751818dc8@waldmann-edv.de> Message-ID: > Can I clarify something? > > append-only more repo would still delete unused segment files once the > last archive that references then is pruned, would it not? A append-only mode repo looks like doing everything normally from the outside, so you can create and delete repos as usual. borg always appends new stuff (PUT, DELETE, COMMIT tags) at the end. The difference is that in append-only mode, there is no compact_segments that removes deleted / superceded PUT entries and rewrites non-compact segments into compact ones. So, the previous repo state is always conserved also when you only look at the first N segment files. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From dastapov at gmail.com Tue Oct 31 09:12:43 2017 From: dastapov at gmail.com (Dmitry Astapov) Date: Tue, 31 Oct 2017 13:12:43 +0000 Subject: [Borgbackup] Repo sync efficiency In-Reply-To: References: <44d0b383-87f3-8f75-8beb-b065fa1ae0dd@gmail.com> <693cc5c2-859a-2263-f97b-d45751818dc8@waldmann-edv.de> Message-ID: So if I have append-only repo and created archive archive-A with ./file/A in it that occupied more than one segment (lets say those were segments 100 and 101), and then created more archives where that file in absent, and then 'borg prune'd repo::A, the following will happen: - Segment 100 (which contained data from ./file/A and nothing else) will be no longer needed and will be physically removed from disk - Segment 101 (which contained data from ./file/A + some other files as well) will be kept around and will not be compacted/rewritten to get rid of the chunk that corresponds to ./file/A Or am I wrong for segment 100 will be kept around even after archive-A is pruned? On Tue, Oct 31, 2017 at 12:03 AM, Thomas Waldmann wrote: > > Can I clarify something? > > > > append-only more repo would still delete unused segment files once the > > last archive that references then is pruned, would it not? > > A append-only mode repo looks like doing everything normally from the > outside, so you can create and delete repos as usual. > > borg always appends new stuff (PUT, DELETE, COMMIT tags) at the end. > > The difference is that in append-only mode, there is no compact_segments > that removes deleted / superceded PUT entries and rewrites non-compact > segments into compact ones. > > So, the previous repo state is always conserved also when you only look > at the first N segment files. > > > -- > GPG ID: 9F88FB52FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- D. Astapov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tw at waldmann-edv.de Wed Nov 1 10:46:37 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 1 Nov 2017 15:46:37 +0100 Subject: [Borgbackup] Repo sync efficiency In-Reply-To: References: <44d0b383-87f3-8f75-8beb-b065fa1ae0dd@gmail.com> <693cc5c2-859a-2263-f97b-d45751818dc8@waldmann-edv.de> Message-ID: <06702023-4688-5527-79a7-bedff445f4b8@waldmann-edv.de> On 10/31/2017 02:12 PM, Dmitry Astapov wrote: > So if I have append-only repo and created archive archive-A with > ./file/A in it that occupied more than one segment (lets say those were > segments 100 and 101), and then created more archives where that file in > absent, and then 'borg prune'd repo::A, the following will happen: > > - Segment 100 (which contained data from ./file/A and nothing else) will > be no longer needed and will be physically removed from disk > - Segment 101 (which contained data from ./file/A + some other files as > well) will be kept around and will not be compacted/rewritten to get rid > of the chunk that corresponds to ./file/A > > Or am I wrong for segment 100 will be kept around even after archive-A > is pruned? I append-only mode, it does not compact or delete any old segment files. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From dastapov at gmail.com Wed Nov 1 11:42:48 2017 From: dastapov at gmail.com (Dmitry Astapov) Date: Wed, 1 Nov 2017 15:42:48 +0000 Subject: [Borgbackup] Repo sync efficiency In-Reply-To: <06702023-4688-5527-79a7-bedff445f4b8@waldmann-edv.de> References: <44d0b383-87f3-8f75-8beb-b065fa1ae0dd@gmail.com> <693cc5c2-859a-2263-f97b-d45751818dc8@waldmann-edv.de> <06702023-4688-5527-79a7-bedff445f4b8@waldmann-edv.de> Message-ID: I just re-read the docs. Sorry for all the silly questions: nice description of append-only was not in the docs back when I switched my repo to append-only mode (~1.5 years ago), and it did not occur to me to re-check the docs. Is there any way to disable just the compaction of the segments? I am keeping offsite copy of my repo in Amazon S3 and I want to minimize number of (old) files changed as they are being pushed to Glacier storage policy after a while and I incur higher costs for updating them. At the same time it would be nice to be able to purge old archives and have them removed. So, basically, I am looking for "append-or-delete-but-never-rewrite" mode. On Wed, Nov 1, 2017 at 2:46 PM, Thomas Waldmann wrote: > On 10/31/2017 02:12 PM, Dmitry Astapov wrote: > > So if I have append-only repo and created archive archive-A with > > ./file/A in it that occupied more than one segment (lets say those were > > segments 100 and 101), and then created more archives where that file in > > absent, and then 'borg prune'd repo::A, the following will happen: > > > > - Segment 100 (which contained data from ./file/A and nothing else) will > > be no longer needed and will be physically removed from disk > > - Segment 101 (which contained data from ./file/A + some other files as > > well) will be kept around and will not be compacted/rewritten to get rid > > of the chunk that corresponds to ./file/A > > > > Or am I wrong for segment 100 will be kept around even after archive-A > > is pruned? > > I append-only mode, it does not compact or delete any old segment files. 
> > > -- > > GPG ID: 9F88FB52FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- D. Astapov -------------- next part -------------- An HTML attachment was scrubbed... URL: From dac at conceptual-analytics.com Wed Nov 1 11:30:58 2017 From: dac at conceptual-analytics.com (Dave Cottingham) Date: Wed, 1 Nov 2017 11:30:58 -0400 Subject: [Borgbackup] Saving logging output to a log file Message-ID: I wanted borg to append the logging output to a log file, and I have succeeded in doing that, but my solution is so clunky I'm hoping someone can point me to a better solution. There doesn't seem to be any direct way to specify a log file to borg, but there is a way to specify a logging configuration file, which is to put the path in the environment variable BORG_LOGGING_CONF. So I do that. Then I took my best shot at making the minimal logging configuration file that just says "append everything to this file." Unfortunately, the result is 22 lines. I have attached it. Can anyone explain what I'm doing wrong? I mean, it works, but this seems silly. Thanks, Dave Cottingham -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: backup-neo-1.logconf Type: application/octet-stream Size: 339 bytes Desc: not available URL: From jasper at knockaert.nl Thu Nov 2 05:53:54 2017 From: jasper at knockaert.nl (Jasper Knockaert) Date: Thu, 02 Nov 2017 10:53:54 +0100 Subject: [Borgbackup] key export Message-ID: Hello If I use key export to backup the repository encryption key, what is the format of the output? Is the exported key still encrypted or not? Or to put it differently: in the case the exported key gets compromised, would one need the repository password the decrypt the archives or not? Best regards Jasper From roland at micite.net Thu Nov 2 07:30:15 2017 From: roland at micite.net (Roland van Laar) Date: Thu, 2 Nov 2017 12:30:15 +0100 Subject: [Borgbackup] key export In-Reply-To: References: Message-ID: Hi Jasper, The format of the output: http://borgbackup.readthedocs.io/en/stable/usage/key.html#borg-key-export Did you try the borg export command? Regular export is base 64. Other options are a qr-code and a checksum based format which is typeable. Regarding the comprised key: I don't know. My advice: Try to restore the backup on a different machine. A backup is only as good as its restore procedure. Regards, Roland On 02-11-17 10:53, Jasper Knockaert wrote: > Hello > > If I use key export to backup the repository encryption key, what is > the format of the output? Is the exported key still encrypted or not? > Or to put it differently: in the case the exported key gets > compromised, would one need the repository password the decrypt the > archives or not? > > Best regards > > > Jasper > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From jasper at knockaert.nl Thu Nov 2 15:33:15 2017 From: jasper at knockaert.nl (Jasper Knockaert) Date: Thu, 02 Nov 2017 20:33:15 +0100 Subject: [Borgbackup] key export In-Reply-To: References: Message-ID: <336F1884-8220-4B22-9206-23B810D05DAE@knockaert.nl> Hi Ronald Thank you for your reaction. But it is not really an answer to my question. 
Perhaps I formulated it poorly, but what I want to know is whether the exported key is encrypted or not. Put differently: can anyone with read access to the archive storage decrypt its contents after obtaining an exported key? Best regards Jasper On 2 Nov 2017, at 12:30, Roland van Laar via Borgbackup wrote: > Hi Jasper, > > The format of the output: > http://borgbackup.readthedocs.io/en/stable/usage/key.html#borg-key-export > > Did you try the borg export command? > > Regular export is base 64. > Other options are a qr-code and a checksum based format which is > typeable. > > Regarding the comprised key: > I don't know. > My advice: Try to restore the backup on a different machine. > A backup is only as good as its restore procedure. > > Regards, > > Roland > > On 02-11-17 10:53, Jasper Knockaert wrote: >> Hello >> >> If I use key export to backup the repository encryption key, what is >> the format of the output? Is the exported key still encrypted or not? >> Or to put it differently: in the case the exported key gets >> compromised, would one need the repository password the decrypt the >> archives or not? >> >> Best regards >> >> >> Jasper >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From giuseppe.arvati at gmail.com Fri Nov 3 03:23:32 2017 From: giuseppe.arvati at gmail.com (Giuseppe Arvati) Date: Fri, 3 Nov 2017 08:23:32 +0100 Subject: [Borgbackup] prune command stats Message-ID: <2c5ce750-3a3b-19ff-2f4d-f5002716118c@gmail.com> Hello, this is the output of a prune command ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size Deleted data: -4.00 TB -2.95 TB -399.54 MB All archives: 2.35 TB 1.74 TB 69.83 GB Unique chunks Total chunks Chunk index: 235474 7046007 ------------------------------------------------------------------------------ ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size Deleted data: -7.92 TB -7.92 TB -678.57 MB All archives: 3.79 TB 3.79 TB 159.05 GB Unique chunks Total chunks Chunk index: 390283 9992006 ------------------------------------------------------------------------------ deleted data size, for original and compressed column, is bigger ( absolute value ) than all archives size Is this ok ? or should be "deleted" switched with "all archive" ? thank you giuseppe From tw at waldmann-edv.de Fri Nov 3 08:39:02 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 3 Nov 2017 13:39:02 +0100 Subject: [Borgbackup] prune command stats In-Reply-To: <2c5ce750-3a3b-19ff-2f4d-f5002716118c@gmail.com> References: <2c5ce750-3a3b-19ff-2f4d-f5002716118c@gmail.com> Message-ID: <99ca9e87-ef3e-498a-2a90-5d8c4a76f42c@waldmann-edv.de> > ?????????????????????? Original size????? Compressed size Deduplicated size > Deleted data:?????????????? -7.92 TB???????????? -7.92 TB -678.57 MB > All archives:??????????????? 3.79 TB????????????? 3.79 TB 159.05 GB That looks weird. Is borg check ok for your repo / archives? If yes, can you file an issue on github, giving way more information (like borg version, OS + version, CPU, ...). 
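As a concrete illustration of that check, with the repository path left as a placeholder since it is not named in this thread:

    borg check --verbose /path/to/repo   # verify repository and archive metadata
    borg --version                       # details worth including in the issue report
    python3 --version
    uname -a
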
-- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From t.schutter at comcast.net Sun Nov 5 11:22:09 2017 From: t.schutter at comcast.net (Tom Schutter) Date: Sun, 5 Nov 2017 09:22:09 -0700 Subject: [Borgbackup] borg 1.0.11 trying to read files in /proc Message-ID: On one machine that I backup, borg is processing files in /proc which is causing it to run for an extremely long time, if not forever. But AFAICT, borg should not be opening files in /proc. After 24 hours I just give up and kill the borg process. Why is borg trying to open these files, and what can I do to prevent it? I had a brilliant idea that maybe there was a hard link to /proc from a directory that I was backing up, but "find / -samefile /proc" did not reveal any. Adding "--exclude /proc" or variations of this pattern did not solve the problem. borg 1.0.11 XUbuntu 17.10 # borg create\ borgbackup at pixel:takifugu::2017-11-05T00:06:13\ --exclude 'sh:/home/*/.adobe' --exclude 'sh:/home/*/.cache' --exclude 'sh:/home/*/.thumbnails'\ --exclude /root/.cache\ --exclude /var/cache --exclude /var/lock --exclude /var/run --exclude /var/tmp\ --compression lz4\ --stats --verbose\ /etc /home /opt /srv /root /usr/local /var proc/1/attr/apparmor/exec: [Errno 22] Invalid argument proc/1/attr/apparmor/prev: [Errno 22] Invalid argument proc/1/attr/exec: [Errno 22] Invalid argument proc/1/attr/fscreate: [Errno 22] Invalid argument proc/1/attr/keycreate: [Errno 22] Invalid argument proc/1/attr/prev: [Errno 22] Invalid argument proc/1/attr/selinux/context: [Errno 22] Invalid argument proc/1/attr/selinux/current: [Errno 22] Invalid argument proc/1/attr/selinux/exec: [Errno 22] Invalid argument proc/1/attr/selinux/fscreate: [Errno 22] Invalid argument proc/1/attr/selinux/keycreate: [Errno 22] Invalid argument proc/1/attr/selinux/prev: [Errno 22] Invalid argument proc/1/attr/selinux/sockcreate: [Errno 22] Invalid argument proc/1/attr/smack/context: [Errno 22] Invalid argument proc/1/attr/smack/current: [Errno 22] Invalid argument proc/1/attr/sockcreate: [Errno 22] Invalid argument proc/1/clear_refs: [Errno 22] Invalid argument proc/1/mem: [Errno 5] Input/output error proc/1/task/1/attr/apparmor/exec: [Errno 22] Invalid argument proc/1/task/1/attr/apparmor/prev: [Errno 22] Invalid argument proc/1/task/1/attr/exec: [Errno 22] Invalid argument proc/1/task/1/attr/fscreate: [Errno 22] Invalid argument proc/1/task/1/attr/keycreate: [Errno 22] Invalid argument proc/1/task/1/attr/prev: [Errno 22] Invalid argument proc/1/task/1/attr/selinux/context: [Errno 22] Invalid argument proc/1/task/1/attr/selinux/current: [Errno 22] Invalid argument proc/1/task/1/attr/selinux/exec: [Errno 22] Invalid argument proc/1/task/1/attr/selinux/fscreate: [Errno 22] Invalid argument proc/1/task/1/attr/selinux/keycreate: [Errno 22] Invalid argument proc/1/task/1/attr/selinux/prev: [Errno 22] Invalid argument proc/1/task/1/attr/selinux/sockcreate: [Errno 22] Invalid argument proc/1/task/1/attr/smack/context: [Errno 22] Invalid argument proc/1/task/1/attr/smack/current: [Errno 22] Invalid argument proc/1/task/1/attr/sockcreate: [Errno 22] Invalid argument proc/1/task/1/clear_refs: [Errno 22] Invalid argument proc/1/task/1/mem: [Errno 5] Input/output error proc/10/attr/apparmor/exec: [Errno 22] Invalid argument proc/10/attr/apparmor/prev: [Errno 22] Invalid argument ... and so on ... 
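One possibility to rule out, offered only as a sketch since the root cause is not established above: if a procfs instance is mounted somewhere below one of the listed paths (for example under a chroot or container tree in /var or /srv), borg will by default recurse into it. The --one-file-system option keeps borg on the file system of each given path, so foreign mounts underneath are not entered. The archive name below is illustrative; the rest is taken from the command above:

    # look for procfs mounts besides /proc itself
    findmnt -t proc

    # trial run that stays on one file system per given path
    borg create --one-file-system --compression lz4 --stats --verbose \
        borgbackup@pixel:takifugu::proc-test \
        /etc /home /opt /srv /root /usr/local /var
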
From tw at waldmann-edv.de Sun Nov 5 19:13:27 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 6 Nov 2017 01:13:27 +0100 Subject: [Borgbackup] borgbackup 1.1.2 bugfix release Message-ID: <7036f7cd-979a-6003-0da1-73ff49d25c13@waldmann-edv.de> https://github.com/borgbackup/borg/releases/tag/1.1.2 details see url. cheers, thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From marcpope at me.com Tue Nov 7 10:41:54 2017 From: marcpope at me.com (Marc Pope) Date: Tue, 07 Nov 2017 15:41:54 +0000 (GMT) Subject: [Borgbackup] Question on Repository Best Practices Message-ID: <67dec6e0-e4e3-4dff-a899-daede681c4d9@me.com> I am slightly confused on the best practice for a repository: For simplicity, say I want to backup: /var /etc? ?(once a day, keeping the last 7 days) /data? ? (every hour, keeping 12 hours, 7 days, 4 weeks) Would it be best practice to use 1 or 2 repos? Is it ok to use 2 different repos per client? This will also be backing up to a remote server. Thanks! Marc Pope -------------- next part -------------- An HTML attachment was scrubbed... URL: From gait at ATComputing.nl Wed Nov 8 02:49:50 2017 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 8 Nov 2017 08:49:50 +0100 Subject: [Borgbackup] Question on Repository Best Practices In-Reply-To: <67dec6e0-e4e3-4dff-a899-daede681c4d9@me.com> References: <67dec6e0-e4e3-4dff-a899-daede681c4d9@me.com> Message-ID: Op 07-11-17 om 16:41 schreef Marc Pope: > Would it be best practice to use 1 or 2 repos? Is it ok to use 2 different repos per client? Hello Marc, I think you must use 2 repos, as 'borg prune' acts on repositories as a whole: borg prune - Prune repository archives according to specified rules Greetz, Gerrit From imperator at jedimail.de Wed Nov 8 03:47:24 2017 From: imperator at jedimail.de (Imperator) Date: Wed, 8 Nov 2017 09:47:24 +0100 Subject: [Borgbackup] Question on Repository Best Practices In-Reply-To: References: <67dec6e0-e4e3-4dff-a899-daede681c4d9@me.com> Message-ID: Hello, Am 08.11.2017 um 08:49 schrieb Gerrit A. Smit: > Op 07-11-17 om 16:41 schreef Marc Pope: >> Would it be best practice to use 1 or 2 repos? Is it ok to use 2 >> different repos per client? > Hello Marc, > > > I think you must use 2 repos, as 'borg prune' acts on repositories as > a whole: > > borg prune - Prune repository archives according to specified rules borg prune can filter by archive name. If Marc uses different naming patterns he can prune each on its own. Greetings Sascha From jost+lists at dimejo.at Wed Nov 8 03:54:17 2017 From: jost+lists at dimejo.at (Alex JOST) Date: Wed, 8 Nov 2017 09:54:17 +0100 Subject: [Borgbackup] Question on Repository Best Practices In-Reply-To: <67dec6e0-e4e3-4dff-a899-daede681c4d9@me.com> References: <67dec6e0-e4e3-4dff-a899-daede681c4d9@me.com> Message-ID: <0e4da4bc-1ba5-7ba0-a64a-15e68f50f388@dimejo.at> Am 07.11.2017 um 16:41 schrieb Marc Pope: > I am slightly confused on the best practice for a repository: > > For simplicity, say I want to backup: > > /var /etc? ?(once a day, keeping the last 7 days) > /data? ? (every hour, keeping 12 hours, 7 days, 4 weeks) > > Would it be best practice to use 1 or 2 repos? Is it ok to use 2 > different repos per client? This will also be backing up to a remote > server. You can start 2 backups with 2 different prefixes. That way you can enforce 2 different prune policies. 
borg create ::var_{now} /var /etc borg create ::data_{now} /data borg prune --keep-daily=7 --prefix var_ borg prune --keep-hourly=12 --keep-daily=7 --keep-weekly=4 --prefix data_ Of course you can use 2 different repositories as well. Its' up to you, whatever fits you best. Keep in mind though that you loose the advantage of deduplication across all data, which will probably make 2 separated repositories bigger than 1 combined repository. -- Alex JOST From gait at ATComputing.nl Wed Nov 8 03:55:44 2017 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 8 Nov 2017 09:55:44 +0100 Subject: [Borgbackup] Question on Repository Best Practices In-Reply-To: References: <67dec6e0-e4e3-4dff-a899-daede681c4d9@me.com> Message-ID: <025d7b42-86a4-137d-bcc3-6d92310c7938@ATComputing.nl> Op 08-11-17 om 09:47 schreef Imperator: > borg prune can filter by archive name. Please tell me how! THX, Gerrit From imperator at jedimail.de Wed Nov 8 04:17:47 2017 From: imperator at jedimail.de (Imperator) Date: Wed, 8 Nov 2017 10:17:47 +0100 Subject: [Borgbackup] Question on Repository Best Practices In-Reply-To: <025d7b42-86a4-137d-bcc3-6d92310c7938@ATComputing.nl> References: <67dec6e0-e4e3-4dff-a899-daede681c4d9@me.com> <025d7b42-86a4-137d-bcc3-6d92310c7938@ATComputing.nl> Message-ID: <1a138725-b20a-ccee-fc62-12224d9f3b1e@jedimail.de> Hi Gerrit, Am 08.11.2017 um 09:55 schrieb Gerrit A. Smit: > Op 08-11-17 om 09:47 schreef Imperator: >> borg prune can filter by archive name. > Please tell me how! > see "Archive filters": https://borgbackup.readthedocs.io/en/stable/usage/prune.html Greetings Sascha From alleyoopster at gmail.com Thu Nov 9 23:50:30 2017 From: alleyoopster at gmail.com (Daniel Phillips) Date: Fri, 10 Nov 2017 06:50:30 +0200 Subject: [Borgbackup] Backup not completing Message-ID: Hi, Server: Rasp Pi2 with Rasbian (Stretch) borg 1.0.9 Client: Arch Linux, borg 1.1.1, Repository size 1.2TB I've been running an automated borg job to a local Rasp Pi for each of my machines. One of them has stopped backing up as in the backup never seems to complete and there are no error messages. A verify on the repo looks good. I notice there is not much CPU or disk activity on either machine during the backup. Is there a way of seeing progress aside from the --list and --stats options. The script is I am using is : #!/bin/bash REPOSITORY=pi at 192.168.0.220:/media/nas/backups/hornswaggle -- borg create --list --stats --exclude-caches --exclude-if-present noborgbackup \ $REPOSITORY::'{hostname}-{now:%Y-%m-%d_%T}' \ /home \ /var \ /etc \ --exclude '/home/*/.cache' \ --exclude '/var/lock' \ --exclude '/var/run' \ --exclude '/var/tmp' \ --exclude '/var/cache' \ borg prune -v $REPOSITORY --prefix '{hostname}-' \ --keep-hourly=2 --keep-daily=7 --keep-weekly=4 --keep-monthly=6 Any help troubleshooting the issue would be very much appreciated. Regards, Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Fri Nov 10 08:28:22 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 10 Nov 2017 14:28:22 +0100 Subject: [Borgbackup] Backup not completing In-Reply-To: References: Message-ID: <8d09e98b-39be-868d-8457-4fb54fe3ff4b@waldmann-edv.de> > I've been running an automated borg job to a local Rasp Pi for each of > my machines. One of them has stopped backing up as in the backup never > seems to complete and there are no error messages. Invoke it manually on the console and use --list to see what it is doing. 
If this is your first backup after switching the client from 1.0 to 1.1, a longer execution time is expected, see changelog. BTW, your repo is rather big for the limited resources (RAM esp.) of a raspi, keep an eye on memory usage. raspi performance is not great when operating normally and likely gets very bad once it runs out of physical RAM and starts swapping. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From alleyoopster at gmail.com Fri Nov 10 11:59:53 2017 From: alleyoopster at gmail.com (Daniel Phillips) Date: Fri, 10 Nov 2017 18:59:53 +0200 Subject: [Borgbackup] Backup not completing In-Reply-To: <8d09e98b-39be-868d-8457-4fb54fe3ff4b@waldmann-edv.de> References: <8d09e98b-39be-868d-8457-4fb54fe3ff4b@waldmann-edv.de> Message-ID: Thanks for the reply. The memory usage is good (all processes under 600MB) and swap is off on pi. I tried running with --list --stats for 2 hours and nothing seems to be happening and no feedback in console. The CPU usage for borg is 0% on the Pi and on the client. I've also noticed the other client is not backing up, but I need to investigate that further before suggesting it is the same problem. On 10/11/17 15:28, Thomas Waldmann wrote: >> I've been running an automated borg job to a local Rasp Pi for each of >> my machines. One of them has stopped backing up as in the backup never >> seems to complete and there are no error messages. > Invoke it manually on the console and use --list to see what it is doing. > > If this is your first backup after switching the client from 1.0 to 1.1, > a longer execution time is expected, see changelog. > > > BTW, your repo is rather big for the limited resources (RAM esp.) of a > raspi, keep an eye on memory usage. > > raspi performance is not great when operating normally and likely gets > very bad once it runs out of physical RAM and starts swapping. > > From plattrap at lholding.net Fri Nov 10 13:24:58 2017 From: plattrap at lholding.net (Lawrence Holding) Date: Sat, 11 Nov 2017 07:24:58 +1300 Subject: [Borgbackup] Backup not completing In-Reply-To: References: <8d09e98b-39be-868d-8457-4fb54fe3ff4b@waldmann-edv.de> Message-ID: Does a smaller backup from the arch machine work? And adding the ?debug trace opinion? > On 11/11/2017, at 05:59, Daniel Phillips wrote: > > Thanks for the reply. > > The memory usage is good (all processes under 600MB) and swap is off on pi. > > I tried running with --list --stats for 2 hours and nothing seems to be > happening and no feedback in console. The CPU usage for borg is 0% on > the Pi and on the client. > > I've also noticed the other client is not backing up, but I need to > investigate that further before suggesting it is the same problem. > > > On 10/11/17 15:28, Thomas Waldmann wrote: >>> I've been running an automated borg job to a local Rasp Pi for each of >>> my machines. One of them has stopped backing up as in the backup never >>> seems to complete and there are no error messages. >> Invoke it manually on the console and use --list to see what it is doing. >> >> If this is your first backup after switching the client from 1.0 to 1.1, >> a longer execution time is expected, see changelog. >> >> >> BTW, your repo is rather big for the limited resources (RAM esp.) of a >> raspi, keep an eye on memory usage. >> >> raspi performance is not great when operating normally and likely gets >> very bad once it runs out of physical RAM and starts swapping. 
>> >> > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From alleyoopster at gmail.com Fri Nov 10 14:04:55 2017 From: alleyoopster at gmail.com (Daniel Phillips) Date: Fri, 10 Nov 2017 21:04:55 +0200 Subject: [Borgbackup] Backup not completing In-Reply-To: References: <8d09e98b-39be-868d-8457-4fb54fe3ff4b@waldmann-edv.de> Message-ID: <533bfda8-2dd8-8d35-1236-7b11ee691d35@gmail.com> A smaller backup seems to stop at "TAM-verified manifest" (waited about 10 mins) ?sudo borg create --debug --list --stats --exclude-caches? --exclude-if-present noborgbackup pi at 192.168.0.220:/media/nas/backups/hornswaggle::'{hostname}-{now:%Y-%m-%d_%T}'? /etc using builtin fallback logging configuration 35 self tests completed in 0.24 seconds SSH command line: ['ssh', 'pi at 192.168.0.220', 'borg', 'serve', '--umask=077', '--debug'] Remote: using builtin fallback logging configuration TAM-verified manifest Backup to a new test repo on pi works (with a warning about security due to old version of borg on server) Dan On 10/11/17 20:24, Lawrence Holding wrote: > Does a smaller backup from the arch machine work? > > And adding the ?debug trace opinion? > > >> On 11/11/2017, at 05:59, Daniel Phillips wrote: >> >> Thanks for the reply. >> >> The memory usage is good (all processes under 600MB) and swap is off on pi. >> >> I tried running with --list --stats for 2 hours and nothing seems to be >> happening and no feedback in console. The CPU usage for borg is 0% on >> the Pi and on the client. >> >> I've also noticed the other client is not backing up, but I need to >> investigate that further before suggesting it is the same problem. >> >> >> On 10/11/17 15:28, Thomas Waldmann wrote: >>>> I've been running an automated borg job to a local Rasp Pi for each of >>>> my machines. One of them has stopped backing up as in the backup never >>>> seems to complete and there are no error messages. >>> Invoke it manually on the console and use --list to see what it is doing. >>> >>> If this is your first backup after switching the client from 1.0 to 1.1, >>> a longer execution time is expected, see changelog. >>> >>> >>> BTW, your repo is rather big for the limited resources (RAM esp.) of a >>> raspi, keep an eye on memory usage. >>> >>> raspi performance is not great when operating normally and likely gets >>> very bad once it runs out of physical RAM and starts swapping. >>> >>> >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup From alleyoopster at gmail.com Sat Nov 11 01:02:16 2017 From: alleyoopster at gmail.com (Daniel Phillips) Date: Sat, 11 Nov 2017 08:02:16 +0200 Subject: [Borgbackup] Backup not completing In-Reply-To: <533bfda8-2dd8-8d35-1236-7b11ee691d35@gmail.com> References: <8d09e98b-39be-868d-8457-4fb54fe3ff4b@waldmann-edv.de> <533bfda8-2dd8-8d35-1236-7b11ee691d35@gmail.com> Message-ID: <1b4c5f79-fd0a-6fa6-82f3-d4e6ef12a961@gmail.com> Downgraded to 1.0.11-1 - backup works. Is this a bug or is there something I need to do to be able to use the latest version? Dan On 10/11/17 21:04, Daniel Phillips wrote: > A smaller backup seems to stop at "TAM-verified manifest" (waited about > 10 mins) > > ?sudo borg create --debug --list --stats --exclude-caches? > --exclude-if-present noborgbackup > pi at 192.168.0.220:/media/nas/backups/hornswaggle::'{hostname}-{now:%Y-%m-%d_%T}'? 
> /etc > > using builtin fallback logging configuration > 35 self tests completed in 0.24 seconds > SSH command line: ['ssh', 'pi at 192.168.0.220', 'borg', 'serve', > '--umask=077', '--debug'] > Remote: using builtin fallback logging configuration > TAM-verified manifest > > Backup to a new test repo on pi works (with a warning about security due > to old version of borg on server) > > Dan > > On 10/11/17 20:24, Lawrence Holding wrote: >> Does a smaller backup from the arch machine work? >> >> And adding the ?debug trace opinion? >> >> >>> On 11/11/2017, at 05:59, Daniel Phillips wrote: >>> >>> Thanks for the reply. >>> >>> The memory usage is good (all processes under 600MB) and swap is off on pi. >>> >>> I tried running with --list --stats for 2 hours and nothing seems to be >>> happening and no feedback in console. The CPU usage for borg is 0% on >>> the Pi and on the client. >>> >>> I've also noticed the other client is not backing up, but I need to >>> investigate that further before suggesting it is the same problem. >>> >>> >>> On 10/11/17 15:28, Thomas Waldmann wrote: >>>>> I've been running an automated borg job to a local Rasp Pi for each of >>>>> my machines. One of them has stopped backing up as in the backup never >>>>> seems to complete and there are no error messages. >>>> Invoke it manually on the console and use --list to see what it is doing. >>>> >>>> If this is your first backup after switching the client from 1.0 to 1.1, >>>> a longer execution time is expected, see changelog. >>>> >>>> >>>> BTW, your repo is rather big for the limited resources (RAM esp.) of a >>>> raspi, keep an eye on memory usage. >>>> >>>> raspi performance is not great when operating normally and likely gets >>>> very bad once it runs out of physical RAM and starts swapping. >>>> >>>> >>> _______________________________________________ >>> Borgbackup mailing list >>> Borgbackup at python.org >>> https://mail.python.org/mailman/listinfo/borgbackup > From ndbecker2 at gmail.com Sat Nov 11 08:23:58 2017 From: ndbecker2 at gmail.com (Neal Becker) Date: Sat, 11 Nov 2017 13:23:58 +0000 Subject: [Borgbackup] downgrade OK? Message-ID: I've been running backups with 1.1.0 on client and server (linux). If I downgrade client and server to 1.0.11, will I be able to continue backups using this same repo? -------------- next part -------------- An HTML attachment was scrubbed... URL: From public at enkore.de Sat Nov 11 18:17:26 2017 From: public at enkore.de (Marian Beermann) Date: Sun, 12 Nov 2017 00:17:26 +0100 Subject: [Borgbackup] downgrade OK? In-Reply-To: References: Message-ID: yes From alleyoopster at gmail.com Tue Nov 14 06:04:18 2017 From: alleyoopster at gmail.com (Daniel Phillips) Date: Tue, 14 Nov 2017 13:04:18 +0200 Subject: [Borgbackup] Backup not completing In-Reply-To: <1b4c5f79-fd0a-6fa6-82f3-d4e6ef12a961@gmail.com> References: <8d09e98b-39be-868d-8457-4fb54fe3ff4b@waldmann-edv.de> <533bfda8-2dd8-8d35-1236-7b11ee691d35@gmail.com> <1b4c5f79-fd0a-6fa6-82f3-d4e6ef12a961@gmail.com> Message-ID: <2e0a2ad1-e40d-cf5f-29e6-9d1f3f128691@gmail.com> Hi, Currently unable to upgrade from 1.0.11.1 as it prevents backups to existing repos. Bug? Dan On 11/11/17 08:02, Daniel Phillips wrote: > Downgraded to 1.0.11-1 - backup works. > > Is this a bug or is there something I need to do to be able to use the > latest version? 
> > Dan > > On 10/11/17 21:04, Daniel Phillips wrote: >> A smaller backup seems to stop at "TAM-verified manifest" (waited about >> 10 mins) >> >> ?sudo borg create --debug --list --stats --exclude-caches? >> --exclude-if-present noborgbackup >> pi at 192.168.0.220:/media/nas/backups/hornswaggle::'{hostname}-{now:%Y-%m-%d_%T}'? >> /etc >> >> using builtin fallback logging configuration >> 35 self tests completed in 0.24 seconds >> SSH command line: ['ssh', 'pi at 192.168.0.220', 'borg', 'serve', >> '--umask=077', '--debug'] >> Remote: using builtin fallback logging configuration >> TAM-verified manifest >> >> Backup to a new test repo on pi works (with a warning about security due >> to old version of borg on server) >> >> Dan >> >> On 10/11/17 20:24, Lawrence Holding wrote: >>> Does a smaller backup from the arch machine work? >>> >>> And adding the ?debug trace opinion? >>> >>> >>>> On 11/11/2017, at 05:59, Daniel Phillips wrote: >>>> >>>> Thanks for the reply. >>>> >>>> The memory usage is good (all processes under 600MB) and swap is off on pi. >>>> >>>> I tried running with --list --stats for 2 hours and nothing seems to be >>>> happening and no feedback in console. The CPU usage for borg is 0% on >>>> the Pi and on the client. >>>> >>>> I've also noticed the other client is not backing up, but I need to >>>> investigate that further before suggesting it is the same problem. >>>> >>>> >>>> On 10/11/17 15:28, Thomas Waldmann wrote: >>>>>> I've been running an automated borg job to a local Rasp Pi for each of >>>>>> my machines. One of them has stopped backing up as in the backup never >>>>>> seems to complete and there are no error messages. >>>>> Invoke it manually on the console and use --list to see what it is doing. >>>>> >>>>> If this is your first backup after switching the client from 1.0 to 1.1, >>>>> a longer execution time is expected, see changelog. >>>>> >>>>> >>>>> BTW, your repo is rather big for the limited resources (RAM esp.) of a >>>>> raspi, keep an eye on memory usage. >>>>> >>>>> raspi performance is not great when operating normally and likely gets >>>>> very bad once it runs out of physical RAM and starts swapping. >>>>> >>>>> >>>> _______________________________________________ >>>> Borgbackup mailing list >>>> Borgbackup at python.org >>>> https://mail.python.org/mailman/listinfo/borgbackup > From maurice.libes at osupytheas.fr Wed Nov 15 06:23:22 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Wed, 15 Nov 2017 12:23:22 +0100 Subject: [Borgbackup] BorgBackup in a french congress Message-ID: Hi to all [sorry I corrected a bad link below] for information I present a communication on BorgBackup in a french national congress of System Adminitrators call Jres (https://www.jres.org/) ( JRes which means network days) ) nothing you don't know :-), it's a presentation of general features of borg, but we've made some perf comparison with backupPC, it will make some publicity for Borg the slides and paper will be available on the congress site - https://www.jres.org/ - https://www.jres.org/en for english version the communication is webcasted tomorrow on thursday 16th at 2:40 pm see you? for some more questions ML -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 2971 bytes Desc: Signature cryptographique S/MIME URL: From maurice.libes at osupytheas.fr Wed Nov 15 06:15:08 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Wed, 15 Nov 2017 12:15:08 +0100 Subject: [Borgbackup] BorgBackup in a frech congress Message-ID: Hi to all for information I present a communication on BorgBackup in a french national congress of System Adminitrators call Jres (https://www.jres.org/) ( JRes which means network days) ) nothing you don't know :-), it's a presentation of general features of borg, but we've made some perf comparison with backupPC, it will make some publicity for Borg the slides and paper will be available on the congress site (http://jres.org) the communication is webcasted tomorrow on thursday 16th at 2:40 pm see you? for some more questions ML -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2971 bytes Desc: Signature cryptographique S/MIME URL: From ndbecker2 at gmail.com Thu Nov 16 07:09:50 2017 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 16 Nov 2017 12:09:50 +0000 Subject: [Borgbackup] borg failed integrity check Message-ID: After upgrading client and server from fedora 26 to fedora 27, I got my first failure today. The borg versions may have been switched back and forth a between 1.0.? and 1.1.12 because 1) the versions on f26 and f27 are different and 2) I locally installed (via pip install --user) version 1.1.12, which I think is being picked up via ssh to the server. Cache integrity data not available: old Borg version modified the cache. Cache integrity data not available: old Borg version modified the cache. Cache integrity data not available: old Borg version modified the cache. 
Keeping archive: nbecker2-2017-11-15 Wed, 2017-11-15 06:44:53 [2fd14d3d80291c1e3e19ad27839574205a2da99b29ee7ac2fe1aadded 3b825f4] Keeping archive: nbecker2-2017-11-14 Tue, 2017-11-14 07:00:25 [f0423d3cdcba00b4cf3bca06073921f9b1fe07561b06370ddb47275ce 46d267f] Keeping archive: nbecker2-2017-11-13 Mon, 2017-11-13 06:50:06 [5dfd59e2e080f7a5cd5837f79db5ec6c26546005e85c991a42c06b603 8476383] Keeping archive: nbecker2-2017-11-10 Fri, 2017-11-10 06:58:32 [1071d6f308d7249972a66fcaea6b1fdeb8c0a9c753d3d97f8439df756 2ce9764] Keeping archive: nbecker2-2017-11-09 Thu, 2017-11-09 06:39:12 [491593bc35e4df15af2b236dd8d76eca82e3334739b7910b5308c6ef9 534b97b] Keeping archive: nbecker2-2017-11-08 Wed, 2017-11-08 06:47:31 [4c18c78bbf87b19ec6345a20df7e1be1884105df5899cbc6015427daf 7cbf320] Keeping archive: nbecker2-2017-11-07 Tue, 2017-11-07 06:58:02 [a7a507c1844d1f530f5350579bf40b12941638692dabc1b98aa6415cf d90455e] Pruning archive: nbecker2-2017-11-06 Mon, 2017-11-06 06:51:02 [72991774bdaf67be33003fe769f8144c387a3641b98dd10f51f3117ad 37cfe1f] (1/1) Keeping archive: nbecker2-2017-11-03 Fri, 2017-11-03 06:52:51 [d594fd99e4dfe6e3f4821a1bc825f0f29984f3dd5017fad32e6103fdb 4c40647] Keeping archive: nbecker2-2017-10-31 Tue, 2017-10-31 06:58:32 [cef483d333eb94027f5aec4684e70bce5d53211297a53292e433bb5fa 8a67e60] Keeping archive: nbecker2-2017-10-27 Fri, 2017-10-27 06:35:09 [5124fc7a0484f7021b75741ae754c08f384c000ca51ec8298a12bad84 3116323] Keeping archive: nbecker2-2017-10-20 Fri, 2017-10-20 07:00:43 [bbcfc61158346b7a4b2b8163e5e38af1afb6b98f71250520e10d5768b beb2dda] Keeping archive: nbecker2-2017-10-13 Fri, 2017-10-13 06:54:40 [0be49afef2150584b51ea5b0d962fbf76ac958c684145d9dae54211e1 c0b8c9e] Keeping archive: nbecker2-2017-09-28 Thu, 2017-09-28 06:59:47 [ad319f85ed1e0dfe75fd363b64586a933c980fb362a1e0ab9aa386b3c c668cc6] Keeping archive: nbecker2-2017-08-31 Thu, 2017-08-31 08:09:57 [fdd67dcb0ea2c618443d8164e8cc331d9711ad13d2cb691916fa33450 bfb9e11] Keeping archive: nbecker2-2017-07-31 Mon, 2017-07-31 07:03:29 [3247d30272aacd36a7f63f814ae940b6b27cb30f5d8d66f99445f21fe e0a9aa0] Keeping archive: nbecker2-2017-06-30 Fri, 2017-06-30 08:34:03 [9f41c8d502872b93e9a3fea94a4d495785d21a3fed0a7ad4bfec21971 554a4df] Keeping archive: nbecker2-2017-05-31 Wed, 2017-05-31 06:53:48 [dce7cf35f14ccc171e75462ca62eb9fdea99c9d0daeb3a12861c379f4 1c65546] File failed integrity check: /home/nbecker/.cache/borg/234c85641ffd393726ad1d0e3adadf78315db6e53725a167bd1cebb7c99ea19f/files Traceback (most recent call last): File "/home/nbecker/.local/lib/python3.6/site-packages/borg/archiver.py", line 4073, in main exit_code = archiver.run(args) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/archiver.py", line 4001, in run return set_ec(func(args)) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/archiver.py", line 148, in wrapper return method(self, args, repository=repository, **kwargs) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/archiver.py", line 543, in do_create create_inner(archive, cache) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/archiver.py", line 506, in create_inner read_special=args.read_special, dry_run=dry_run, st=st) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/archiver.py", line 617, in _process read_special=read_special, dry_run=dry_run) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/archiver.py", line 617, in _process read_special=read_special, dry_run=dry_run) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/archiver.py", line 617, in 
_process read_special=read_special, dry_run=dry_run) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/archiver.py", line 594, in _process status = archive.process_file(path, st, cache, self.ignore_inode, self.files_cache_mode) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/archive.py", line 972, in process_file ids = cache.file_known_and_unchanged(path_hash, st, ignore_inode, files_cache_mode) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/cache.py", line 910, in file_known_and_unchanged self._read_files() File "/home/nbecker/.local/lib/python3.6/site-packages/borg/cache.py", line 516, in _read_files return File "/home/nbecker/.local/lib/python3.6/site-packages/borg/crypto/file_integrity.py", line 193, in __exit__ self.hash_part('final', is_final=True) File "/home/nbecker/.local/lib/python3.6/site-packages/borg/crypto/file_integrity.py", line 188, in hash_part raise FileIntegrityError(self.path) borg.crypto.file_integrity.FileIntegrityError: File failed integrity check: /home/nbecker/.cache/borg/234c85641ffd393726ad1d0e3adadf78315db6e53725a167bd1cebb7c99ea19f/files Platform: Linux nbecker2 4.13.12-300.fc27.x86_64 #1 SMP Wed Nov 8 16:38:01 UTC 2017 x86_64 x86_64 Linux: Fedora 27 Twenty Seven Borg: 1.1.2 Python: CPython 3.6.3 PID: 7430 CWD: /home/nbecker sys.argv: ['/home/nbecker/.local/bin/borg', 'create', '--progress', '-v', '--stats', '-C', 'lz4', '::{hostname}-{now:%Y-%m-%d}', '/home/nbecker', '--exclude', '/home/nbecker/.cache', '--exclude', '*.pyc', '--exclude', '/home/nbecker/.local/MATLAB'] SSH_ORIGINAL_COMMAND: None -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Thu Nov 16 08:24:17 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 16 Nov 2017 14:24:17 +0100 Subject: [Borgbackup] borg failed integrity check In-Reply-To: References: Message-ID: Hi Neal, > The borg versions may have been switched back and forth a between 1.0.? > and 1.1.12 because 1) the versions on f26 and f27 are different So it is 1.0.x and 1.1.2. > Cache integrity data not available: old Borg version modified the cache. This is expected when switching between 1.0 and 1.1 and is no problem, just a notice. > Keeping archive: nbecker2-2017-11-15 Wed, 2017-11-15 OK, so your script runs prune first and then create. > File failed integrity check: > /home/nbecker/.cache/borg/234c85641ffd393726ad1d0e3adadf78315db6e53725a167bd1cebb7c99ea19f/files OK, so it thinks the "files" cache is (was) corrupted for that repo. If you start borg create again, do you get the same error? If you do, what is ls -l /home/nbecker/.cache/borg/234c85641ffd393726ad1d0e3adadf78315db6e53725a167bd1cebb7c99ea19f/files ? Do you always run borg as the same user? > line 910, in file_known_and_unchanged > self._read_files() OK, so this is borg create demand-loading the files cache - and it is raising IntegrityError because the hash does not match the files content read from disk. There could be different reasons for this: - you discovered a bug (possible, but not extremely likely as you are the first reporting this but not the first using 1.1.2) - the "files" cache on disk was somehow corrupted (content modified, truncated / appended, whatever), so it does not match the stored hash any more - the hash was corrupted Corruption could be due to: - disk errors - transmission errors - RAM errors The FAQ has some hints about checking your hardware.
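(As an illustration only: the failed check is conceptually a "store a strong hash when the cache file is written, recompute it when reading it back" scheme. A minimal sketch of that detect-on-read idea follows; it is not borg's actual file_integrity code, and the SHA-512 choice and names are invented for this example.)

import hashlib

class FileIntegrityError(Exception):
    """Raised when the on-disk content no longer matches the stored digest."""

def read_verified(path, stored_hexdigest):
    # Read the whole file and compare a freshly computed digest against the
    # digest that was recorded when the file was written.
    with open(path, 'rb') as f:
        data = f.read()
    if hashlib.sha512(data).hexdigest() != stored_hexdigest:
        # any corruption (disk errors, RAM errors, truncation, transmission) ends up here
        raise FileIntegrityError(path)
    return data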
The hash we use is a strong hash, so it could even be it detects errors that have otherwise not been detected by weaker mechanisms. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From ndbecker2 at gmail.com Thu Nov 16 08:31:23 2017 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 16 Nov 2017 13:31:23 +0000 Subject: [Borgbackup] borg failed integrity check In-Reply-To: References: Message-ID: Well I rm'd .cache/borg on the client, and reran backup successfully On Thu, Nov 16, 2017 at 8:24 AM Thomas Waldmann wrote: > Hi Neal, > > > The borg versions may have been switched back and forth a between 1.0.?> > and 1.1.12 because 1) the versions on f26 and f27 are different > > So it is 1.0.x and 1.1.2. > > > Cache integrity data not available: old Borg version modified the cache. > This is expected when switching between 1.0 to 1.1 and no problem, just > a notice. > > > Keeping archive: nbecker2-2017-11-15 Wed, 2017-11-15 > > OK, so your script runs prune first and then create. > > > File failed integrity check: > > > /home/nbecker/.cache/borg/234c85641ffd393726ad1d0e3adadf78315db6e53725a167bd1cebb7c99ea19f/files > > OK, so it thinks the "files" cache is (was) corrupted for that repo. > > If you start borg create again, do you get same error? > > If you do, what is ls -l > > /home/nbecker/.cache/borg/234c85641ffd393726ad1d0e3adadf78315db6e53725a167bd1cebb7c99ea19f/files > ? > > Do you always run borg as the same user? > > > line 910, in file_known_and_unchanged > > self._read_files() > > OK, so this is borg create demand-loading the files cache - and it is > raising IntegrityError because the hash does not match the files content > read from disk. > > There could be different reasons for this: > > - you discovered a bug (possible, but not extremely likely as you are > the first reporting this but not the first using 1.1.2) > > - the "files" cache on disk was somehow corrupted (content modified, > truncated / appended, whatever), so it does not match the stored hash > any more > > - the hash was corrupted > > Corruption could be due to: > - disk errors > - transmission errors > - RAM errors > > The FAQ has some hints about checking your hardware. > > The hash we use is a strong hash, so it could even be it detects errors > that have otherwise not been detected by weaker mechanisms. > > > Cheers, Thomas > > -- > > GPG ID: 9F88FB52FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbass at kenbass.com Thu Nov 16 09:59:38 2017 From: kbass at kenbass.com (Ken Bass) Date: Thu, 16 Nov 2017 09:59:38 -0500 Subject: [Borgbackup] VM backup issues Message-ID: I am trying to backup VM images (that are LVM block device based) and have run into issues. 1) After looking at the some of the code, I realized that the --read-special is not doing what I thought it would do. I had expected that if a VM had not been powered on or touched in any way since the last backup it would not 'chunk' (if that is the proper term) the file during a backup. All my attempts at ignoring ctime,mtime, etc made no difference. The documentation is confusing - particularly the FAQ and the 'I am seeing ?A? (added) status for an unchanged file!?' section. 
It says 'If you want to avoid unnecessary chunking, just create or touch a small or empty file in your backup source file set (so that one has the latest mtime, not your 50GB VM disk image) and, if you do snapshots, do the snapshot after that.' I found the above not to be true when using the --read-special flag. 2) When I create a backup of my VM, I create a temp directory and symlink the VM block device as well as some regular files. I also symlink any file-based images related to the VM (i.e. .img or .iso). I am finding that for the other non-block files only the symlinks are backed up. How do I get it to back up the file, not the symlink? I am having to resort to bind mounting non-block device files because of this. Is this the only solution? 3) Because of the chunking issue, I took a different approach. I wrote a wrapper that first does a borg list --json --last 1 to parse the archive 'start' time. I am assuming this tells me the time of the most recent backup. I then check, via os.stat, if any of the VM-related images have been modified since then (I am using mtime). If not, I skip the backup as unnecessary. I believe this will work as long as when I prune I am careful to use --last 1 so I don't delete VM images that haven't been powered on for a while. *I think I found a bug doing this*. I created my tmp directory and os.chdir() into it. I was using 'borg create ... *' via a subprocess.check_output(). I think due to safety checks, the '*' was passed literally rather than globbed. I saw the following: *: [Errno 2] No such file or directory: '*' ------------------------------------------------------------------------------ Archive name: test123-11-15-2017-23:02:13 Archive fingerprint: ccc92e6658af7384e8d38ac46bdd2999ab0ebc7538f8f2d0234e9302f966aaa6 Time (start): Wed, 2017-11-15 23:02:14 Time (end): Wed, 2017-11-15 23:02:14 Duration: 0.01 seconds Number of files: 0 Utilization of max. archive size: 0% ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 534 B 503 B 503 B All archives: 6.44 GB 579.06 MB 92.35 MB Unique chunks Total chunks Chunk index: 68 981 ------------------------------------------------------------------------------ terminating with warning status, rc 1 It returned an rc of 1, did not back up any files, but it created an entry as if the backup was complete. Why was this a problem? Because above, when I use 'borg list --json --last 1' to see when the last backup was done, it appears a backup was done when it really wasn't. Make sense? From public at enkore.de Thu Nov 16 12:31:17 2017 From: public at enkore.de (Marian Beermann) Date: Thu, 16 Nov 2017 18:31:17 +0100 Subject: [Borgbackup] borg failed integrity check In-Reply-To: References: Message-ID: <38bd7ea3-d203-e503-accb-5faad75206f8@enkore.de> The files cache could be another spot where an integrity error should cause a soft failure (discard data, warn, and continue) instead of a hard failure (crash). -M From tw at waldmann-edv.de Thu Nov 16 12:51:31 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 16 Nov 2017 18:51:31 +0100 Subject: [Borgbackup] VM backup issues In-Reply-To: References: Message-ID: > 1) After looking at the some of the code, I realized that the > --read-special is not doing what I thought it would do. Well, it is special.
:D The files-cache related stuff applies to regular files, not to block devices (or other stuff processed by --read-special). > 2) When I create a backup my VM, I create a temp directory, symlink the > VM block device as well as some regular files. I also symlink any file > based images related to the VM (ie; .img or .iso). I am finding that the > other non block files are only backing up the symlinks. How do I get it > to backup the file, not the symlink? Use a hardlink? > I am having to resort to bind mounting non-block device files because of > this. Is this the only solution? If hardlinking does not work, yes. > 3) Because of the chunking issue, I took a different approach. I wrote a > wrapper that first does a borg list --json --last 1 to parse the archive > 'start' time. I am assuming this tells me the time of the most recent > backup. I then check, via os.stat, if any of the VM related images have > been modified since them (I am using mtime). You are using mtime of a regular (disk image) file or the LVM device file? > *I think I found a bug doing this*. I created my tmp directory and > os.chdir() into it. I was using 'borg create ... *' via a > subprocess.check_output(). I think due to safety checks, the '*' was > passed literately rather than globbing. If you don't use a shell, there won't be a shell that expands wildcards. And on UNIX, it is the job of the shell to expand commandline arguments, except if you quote them. > *: [Errno 2] No such file or directory: '*' Normal behaviour. It has trouble opening that "file", emits a warning and continues (and then it finds there is nothing more to do). > Number of files: 0 > terminating with warning status, rc 1 Hmm, there might be some check if a given path did not match anything at all and then sets warning status for the rc code. > It returned an rc of 1, did not backup any files, but it created an > entry as if the backup was complete. Why was this a problem? Because > above when I use the 'borg list --json --last 1' to see when the last > backup was done, it appears a backup was done when it really wasn't. > Make sense? I see your problem, but the solution is not borg deleting the archive it began if it ends up writing nothing to it, but rather that you check if the borg command returns with rc != 0 and fix whatever the issue was. -- GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt. From kbass at kenbass.com Thu Nov 16 14:00:12 2017 From: kbass at kenbass.com (Ken Bass) Date: Thu, 16 Nov 2017 14:00:12 -0500 Subject: [Borgbackup] VM backup issues In-Reply-To: References: Message-ID: On 11/16/2017 12:51 PM, Thomas Waldmann wrote: > >> I am having to resort to bind mounting non-block device files because >> of this. Is this the only solution? > > If hardlinking does not work, yes. Hardlink is not reliable because it doesn't work across file system boundaries. > >> 3) Because of the chunking issue, I took a different approach. I >> wrote a wrapper that first does a borg list --json --last 1 to parse >> the archive 'start' time. I am assuming this tells me the time of the >> most recent backup. I then check, via os.stat, if any of the VM >> related images have been modified since them (I am using mtime). > > You are using mtime of a regular (disk image) file or the LVM device > file? Both depending on what the underlying file is... os.stat following symlinks will indicate that the mtime of either. That appears to be what I want.
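(For illustration, a rough sketch of the wrapper flow Ken describes: read the start time of the last archive via borg list --json --last 1, skip the run if none of the image files changed since then, and expand the wildcard with glob instead of relying on a shell. The repository path, working directory, archive naming and the timestamp parsing below are assumptions made up for this sketch, not a drop-in script.)

import glob
import json
import os
import subprocess
from datetime import datetime

REPO = '/backups/vm-repo'   # hypothetical repository path

def last_archive_start(repo):
    out = subprocess.check_output(['borg', 'list', '--json', '--last', '1', repo])
    archives = json.loads(out.decode())['archives']
    if not archives:
        return None
    # 'start' looks like 2017-11-15T23:02:14.000000; keep only the seconds part
    return datetime.strptime(archives[0]['start'][:19], '%Y-%m-%dT%H:%M:%S')

def needs_backup(paths, since):
    if since is None:
        return True
    # os.stat() follows symlinks, so this sees the target's mtime
    return any(datetime.fromtimestamp(os.stat(p).st_mtime) > since for p in paths)

def run_backup(workdir, repo, archive_name):
    os.chdir(workdir)
    paths = sorted(glob.glob('*'))   # expand '*' ourselves; no shell is involved
    rc = subprocess.call(['borg', 'create', '--stats',
                          '{}::{}'.format(repo, archive_name)] + paths)
    if rc != 0:                      # borg: 0 = success, 1 = warning, 2 = error
        raise RuntimeError('borg create failed with rc {}'.format(rc))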
> >> *I think I found a bug doing this*. I created my tmp directory and >> os.chdir() into it. I was using 'borg create ... *' via a >> subprocess.check_output(). I think due to safety checks, the '*' was >> passed literally rather than globbed. > > If you don't use a shell, there won't be a shell that expands wildcards. > And on UNIX, it is the job of the shell to expand commandline > arguments, except if you quote them. Correct - it was just a typo on my part, I should have used 'glob.glob('*')' but I thought the result was inconsistent - making an empty backup. In fact it seems downright dangerous to proceed with an empty backup. If someone were to do a prune afterwards they could potentially wipe out their last good backup depending on how many they are keeping. > Number of files: 0 >> terminating with warning status, rc 1 > > Hmm, there might be some check if a given path did not match anything > at all and then sets warning status for the rc code. I don't really know what it is supposed to do. I consulted the man page for borg and the entirety of the 'Return codes' section is: Borg can exit with the following return codes (rc): If you use --show-rc, the return code is also logged at the indicated level as the last log entry. There is no description of what any of the return codes are. I would have thought a non-zero meant a backup was not made/failed, but that was obviously not the case. Of course I have corrected my typo since, but still think it is dangerous to make an empty backup like that. I was thinking maybe it would have treated it as an aborted backup that didn't complete or something, but not a completed one. From tw at waldmann-edv.de Thu Nov 16 20:30:14 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 17 Nov 2017 02:30:14 +0100 Subject: [Borgbackup] VM backup issues In-Reply-To: References: Message-ID: <775593eb-8081-33f7-d232-de0ed5a1d82e@waldmann-edv.de> >> You are using mtime of a regular (disk image) file or the LVM device >> file? > Both depending on what the underlying file is... os.stat following > symlinks will indicate that the mtime of either. That appears to be what > I want. Hmm, is there a strict relationship between the mtime/ctime of the device file and the device contents? > Correct - it was just a typo on my part, I should have used > 'glob.glob('*')' but I thought the result was inconsistent - making an > empty backup. In fact it seems downright dangerous to proceed with an > empty backup. If someone were to do a prune afterwards they could > potentially wipe out their last good backup depending on how many they > are keeping. That has nothing to do with empty or not empty. The question is whether the archive contains what you specified. Missing one important file or folder can be even worse than being empty because it is not that obvious. And borg indicates that there was something going wrong via rc 1 and a warning. >> Number of files: 0 >>> terminating with warning status, rc 1 >> >> Hmm, there might be some check if a given path did not match anything >> at all and then sets warning status for the rc code. > I don't really know what it is supposed to do. I consulted the man page > for borg and the entirety of the 'Return codes' section is: > > Borg can exit with the following return codes (rc): > > If you use --show-rc, the return code is also logged at the > indicated level as the last log entry. > There is no description of what any of the return codes are.
http://borgbackup.readthedocs.io/en/stable/usage/general.html#return-codes There is a table that explains it. I also see the table in the man page. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From kbass at kenbass.com Thu Nov 16 22:20:57 2017 From: kbass at kenbass.com (Ken Bass) Date: Thu, 16 Nov 2017 22:20:57 -0500 Subject: [Borgbackup] VM backup issues In-Reply-To: <775593eb-8081-33f7-d232-de0ed5a1d82e@waldmann-edv.de> References: <775593eb-8081-33f7-d232-de0ed5a1d82e@waldmann-edv.de> Message-ID: <5f788987-6625-0b60-ad9f-d7e0157b4b18@kenbass.com> On 11/16/2017 08:30 PM, Thomas Waldmann wrote: >>> You are using mtime of a regular (disk image) file or the LVM device >>> file? >> Both depending on what the underlying file is... os.stat following >> symlinks will indicate that the mtime of either. That appears to be what >> I want. > Hmm, is there a strict relationship between the mtime/ctime of the > device file and the device contents? Sorry I am not sure of your question here. I know in my use case if a VM has never been powered on since the last backup, the mtime/ctime of the underlying LVM will not have been touched, so there is no need to chunk a 30G or 100G file just for the dedup to ignore it. In my case it is a waste of time / power. > >> Correct - it was just a typo on my part, I should have used >> 'glob.glob('*')' but I thought the result was inconsistent - making an >> empty backup. In fact is seems downright dangerous to proceed with an >> empty backup. If someone were to do a prune afterwards they could >> potentially wipe out their last good backup depending on how many they >> are keeping. > That has nothing to do with empty or not empty. > The question is if the archive contains what you specified. > > Missing one important file or folder can be even worse than being empty > because it is not that obvious. > > And borg indicates that there was something going wrong via rc 1 and a > warning. I understand what you are saying, but I? think specifying a set of files that are ALL errors should return an rc of 2 and not create an empty backup. Just my opinion about what is less error prone and more user friendly. My automated script did see that was a non zero error code because an exception was thrown. I just didn't expect an empty data set to be present from that run. >> There is no description of what any of the returns code are. > http://borgbackup.readthedocs.io/en/stable/usage/general.html#return-codes > > There is a table that explains it. > > I also see the table in the man page. > I am running the rpm's from EPEL on Centos. Those are the man pages missing that info. Thanks for the pointer to the documentation page--I missed that. (I was probably looking under the create command and missed the general/common usage area) From gait at ATComputing.nl Fri Nov 17 04:27:39 2017 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Fri, 17 Nov 2017 10:27:39 +0100 Subject: [Borgbackup] borg fuse fails :: UnicodeEncodeError: 'ascii' codec can't encode characters in position ... Message-ID: Hello, TL;DR: checking repository + archive OK, upgrade OK, archive info OK, FUSE fails. I use a wrapper ksh script to combine ZFS and borg. Checking the repository is OK: |zfsborg_check :: checking repository |Starting archive consistency check... |Analyzing archive data_backup at _data_backup_.zfs_snapshot_borg-2016-08-29T11:55:51 (1/1) |Archive consistency check complete, no problems found. "borg info ..." 
is OK too: |# ./zfsborg_info |zfsborg_info :: choose an archive for info | |1) data_backup at _data_backup_.zfs_snapshot_borg-2016-08-29T11:55:51 |#? 1 | |zfsborg_info :: getting info on data_backup at _data_backup_.zfs_snapshot_borg-2016-08-29T11:55:51 -- please wait! | |Synchronizing chunks cache... |Archives: 1, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 1. |Fetching and building archive index for data_backup at _data_backup_.zfs_snapshot_borg-2016-08-29T11:55:51 ... |Merging into master chunks index ... |Done. |Archive name: data_backup at _data_backup_.zfs_snapshot_borg-2016-08-29T11:55:51 |Archive fingerprint: a3f36d64a56c6a98256e0f66ef222d4b42d8743745c7eb05506e8c43848cefa1 |Comment: |Hostname: sanger |Username: sysman |Time (start): Mon, 2016-08-29 11:55:51 |Time (end): Tue, 2016-08-30 06:47:36 |Duration: 18 hours 51 minutes 44.97 seconds |Number of files: 9068680 |Command line: borg create --debug --compression=lz4 --progress --stats --verbose ::data_backup at _data_backup_.zfs_snapshot_borg-2016-08-29T11:55:51 /data/backup/.zfs/snapshot/borg-2016-08-29T11:55:51 |Utilization of maximum supported archive size: 58% |------------------------------------------------------------------------------ | Original size Compressed size Deduplicated size |This archive: 1.25 TB 783.70 GB 304.66 GB |All archives: 1.25 TB 783.70 GB 304.66 GB | | Unique chunks Total chunks |Chunk index: 3396800 9441380 But mounting the archive fails:192.84.30.70 |# ./zfsborg_fuse |zfsborg_fuse :: choose an archive to mount | |1) data_backup at _data_backup_.zfs_snapshot_borg-2016-08-29T11:55:51 |#? 1 | |zfsborg_fuse :: mounting data_backup at _data_backup_.zfs_snapshot_borg-2016-08-29T11:55:51 -- please wait! | |Mounting filesystem |Local Exception |Traceback (most recent call last): | File "borg/archiver.py", line 4073, in main | File "borg/archiver.py", line 4001, in run | File "borg/archiver.py", line 1323, in do_mount | File "borg/archiver.py", line 148, in wrapper | File "borg/archiver.py", line 1333, in _do_mount | File "borg/fuse.py", line 286, in mount | File "borg/fuse.py", line 243, in _create_filesystem | File "borg/fuse.py", line 329, in process_archive | File "os.py", line 862, in fsencode |UnicodeEncodeError: 'ascii' codec can't encode characters in position 120-123: ordinal not in range(128) | |Platform: FreeBSD sanger 11.1-RELEASE-p1 FreeBSD 11.1-RELEASE-p1 #0: Wed Aug 9 11:55:48 UTC 2017 root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 amd64 |Borg: 1.1.2 Python: CPython 3.5.4 |PID: 59692 CWD: /usr/home/sysman |sys.argv: ['borg', 'mount', '--debug', '--verbose', '::data_backup at _data_backup_.zfs_snapshot_borg-2016-08-29T11:55:51', '/data/borg/fuse'] SSH_ORIGINAL_COMMAND: None From kbass at kenbass.com Sat Nov 18 09:48:20 2017 From: kbass at kenbass.com (Ken Bass) Date: Sat, 18 Nov 2017 09:48:20 -0500 Subject: [Borgbackup] VM backup issues In-Reply-To: <5f788987-6625-0b60-ad9f-d7e0157b4b18@kenbass.com> References: <775593eb-8081-33f7-d232-de0ed5a1d82e@waldmann-edv.de> <5f788987-6625-0b60-ad9f-d7e0157b4b18@kenbass.com> Message-ID: <0c7068b5-8d0c-1282-0b5b-6f591ab12334@kenbass.com> On 11/16/2017 10:20 PM, Ken Bass wrote: > On 11/16/2017 08:30 PM, Thomas Waldmann wrote: >>>> You are using mtime of a regular (disk image) file or the LVM device >>>> file? >>> Both depending on what the underlying file is... os.stat following >>> symlinks will indicate that the mtime of either. That appears to be >>> what >>> I want. 
>> Hmm, is there a strict relationship between the mtime/ctime of the >> device file and the device contents? > Sorry I am not sure of your question here. I know in my use case if a > VM has never been powered on since the last backup, the mtime/ctime of > the underlying LVM will not have been touched, so there is no need to > chunk a 30G or 100G file just for the dedup to ignore it. In my case > it is a waste of time / power. Working on this some more, now I understand your question. There may not be relationship between the mtime/ctime of the block device file and the block device contents. I think the mtime/ctime gets set to when the host server created the LVM block device or when a snapshot against the LVM was last taken. So the timestamp might represent when the host server was last rebooted - not related to an update of the device contents. I might need to use the libvirt 'hooks' to maintain a separate timestamp file of when the VM was last powered on. From abhishek.garg at wingify.com Mon Nov 20 06:47:41 2017 From: abhishek.garg at wingify.com (Abhishek Garg) Date: Mon, 20 Nov 2017 17:17:41 +0530 Subject: [Borgbackup] Borg stale on client Message-ID: <0fbcec8f-61a1-1da0-4ece-40287d9b6dce@wingify.com> I was setting up borg on a client to connect server, but it's stale and nothing going on server. If I'm forcefully killing thread on Client it's re-spawning itself. Screen I'm seeing: "Mon Nov 20 07:23:33 UTC 2017 Starting backup", nothing after that. Current time: Mon Nov 20 11:44:12 UTC 2017 root@: # lsb_release -a No LSB modules are available. Distributor ID:??? Debian Description:??? Debian GNU/Linux 7.4 (wheezy) Release:??? 7.4 Codename:??? wheezy Thanks -- Abhishek Garg From tw at waldmann-edv.de Mon Nov 20 08:07:42 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 20 Nov 2017 14:07:42 +0100 Subject: [Borgbackup] Borg stale on client In-Reply-To: <0fbcec8f-61a1-1da0-4ece-40287d9b6dce@wingify.com> References: <0fbcec8f-61a1-1da0-4ece-40287d9b6dce@wingify.com> Message-ID: <0e927968-a5e6-af94-8fd5-1f504b98c7c4@waldmann-edv.de> On 11/20/2017 12:47 PM, Abhishek Garg wrote: > I was setting up borg on a client to connect server, but it's stale and > nothing going on server. Server activity depends on repo state and client input data. E.g. if the client reads a lot of data it already knows it has already in the repo, it won't talk to the server. Also, repo might be locked, then the borg client will wait for --lock-wait time whether the lock goes away. > If I'm forcefully killing thread on Client it's > re-spawning itself. There is no re-spawn logic inside borg. > Screen I'm seeing: "Mon Nov 20 07:23:33 UTC 2017 Starting backup", > nothing after that. That is not an output from borg, but from your script. What's your borg version and borg commandline? You could add -v --list or --progress to get more info about what it is doing. Or even --debug. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From abhishek.garg at wingify.com Tue Nov 21 03:14:07 2017 From: abhishek.garg at wingify.com (Abhishek Garg) Date: Tue, 21 Nov 2017 13:44:07 +0530 Subject: [Borgbackup] Borg stale on client In-Reply-To: <0e927968-a5e6-af94-8fd5-1f504b98c7c4@waldmann-edv.de> References: <0fbcec8f-61a1-1da0-4ece-40287d9b6dce@wingify.com> <0e927968-a5e6-af94-8fd5-1f504b98c7c4@waldmann-edv.de> Message-ID: Thanks Thomas. As you suggested for repo locked, I tried with creating new repo but still same thing happening from client. 
For the re-spawning part, I had killed the bash script forcefully but some of the threads are still running. It is opening 2 threads each time I kill the process manually. /root 10323 1 0 Nov20 pts/5 00:00:00 borg create --verbose --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 ::{hostname}-{now} /root/backup// //root 10324 10323 0 Nov20 pts/5 00:00:01 borg create --verbose --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 ::{hostname}-{now} /root/backup// //root 25877 1 0 Nov20 pts/6 00:00:01 borg create --verbose --filter AME --list --stats --show-rc --noatime --compression lz4 --remote-ratelimit 51200 ::{hostname}-{now} /root/backup// //root 28996 1 0 Nov20 pts/5 00:00:01 borg create --verbose --filter AME --list --stats --show-rc --noatime --compression lz4 --remote-ratelimit 51200 ::{hostname}-{now} /root/backup// //root 31567 27594 0 08:00 pts/5 00:00:00 bash borg.sh// //root 31569 31567 0 08:00 pts/5 00:00:00 borg create --verbose --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 ::{hostname}-{now} /root/backup// //root 31573 31569 0 08:00 pts/5 00:00:01 borg create --verbose --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 ::{hostname}-{now} /root/backup/ borg version --> borg 1.1.1 Debug Output: Tue Nov 21 08:09:57 UTC 2017 Starting backup using builtin fallback logging configuration 35 self tests completed in 0.20 seconds SSH command line: ['ssh', '-i', '/var/lib/backup/.ssh/id_rsa', 'user at ip', 'borg', 'serve', '--umask=077', '--debug'] Remote: using builtin fallback logging configuration Remote: 35 self tests completed in 0.14 seconds Remote: using builtin fallback logging configuration Remote: Initialized logging system for JSON-based protocol Remote: Resolving repository path b'/disk1/borg' Remote: Resolved repository path to '/disk1/borg' Remote: Verified integrity of /disk1/borg/index.1 TAM-verified manifest security: read previous location 'ssh://user at ip/disk1/borg' security: read manifest timestamp '2017-11-21T07:55:25.603239' security: determined newest manifest timestamp as 2017-11-21T07:55:25.603239 security: repository checks ok, allowing access Nothing happening after that. On 20/11/17 6:37 PM, Thomas Waldmann wrote: > On 11/20/2017 12:47 PM, Abhishek Garg wrote: >> I was setting up borg on a client to connect server, but it's stale and >> nothing going on server. > Server activity depends on repo state and client input data. > > E.g. if the client reads a lot of data it already knows it has already > in the repo, it won't talk to the server. > > Also, repo might be locked, then the borg client will wait for > --lock-wait time whether the lock goes away. > >> If I'm forcefully killing thread on Client it's >> re-spawning itself. > There is no re-spawn logic inside borg. > >> Screen I'm seeing: "Mon Nov 20 07:23:33 UTC 2017 Starting backup", >> nothing after that. > That is not an output from borg, but from your script. > > What's your borg version and borg commandline? > > You could add -v --list or --progress to get more info about what it is > doing. Or even --debug. > > Cheers, Thomas > -- Abhishek Garg Software Engineer, DevOps Wingify Software Pvt. Ltd. p: +91-9015346739| s: abhishek.garg_wingify -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tw at waldmann-edv.de Tue Nov 21 07:30:30 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 21 Nov 2017 13:30:30 +0100 Subject: [Borgbackup] Borg stale on client In-Reply-To: References: <0fbcec8f-61a1-1da0-4ece-40287d9b6dce@wingify.com> <0e927968-a5e6-af94-8fd5-1f504b98c7c4@waldmann-edv.de> Message-ID: > /root???? 10323???? 1? 0 Nov20 pts/5??? 00:00:00 borg create --verbose > --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 > ::{hostname}-{now} /root/backup// Please remove the filter and the ratelimit until you get something basically working. Also there is a double trailing slash you should fix. > borg version --> borg 1.1.1 Use >= 1.1.2. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From abhishek.garg at wingify.com Tue Nov 21 11:08:38 2017 From: abhishek.garg at wingify.com (Abhishek Garg) Date: Tue, 21 Nov 2017 21:38:38 +0530 Subject: [Borgbackup] Borg stale on client In-Reply-To: References: <0fbcec8f-61a1-1da0-4ece-40287d9b6dce@wingify.com> <0e927968-a5e6-af94-8fd5-1f504b98c7c4@waldmann-edv.de> Message-ID: I tried with updated process, but same thing happening. I had also remove suggested params from script. Console output for ps command: root????? 2572???? 1? 0 08:09 pts/5??? 00:00:00 borg create --verbose --debug --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 ::{hostname}-{now} /opt/test root????? 2573? 2572? 0 08:09 pts/5??? 00:00:01 borg create --verbose --debug --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 ::{hostname}-{now} /opt/test root???? 10323???? 1? 0 Nov20 pts/5??? 00:00:00 borg create --verbose --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 ::{hostname}-{now} /root/backup root???? 10324 10323? 0 Nov20 pts/5??? 00:00:01 borg create --verbose --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 ::{hostname}-{now} /root/backup root???? 25652 25640? 0 16:03 pts/9??? 00:00:00 grep borg root???? 25877???? 1? 0 Nov20 pts/6??? 00:00:01 borg create --verbose --filter AME --list --stats --show-rc --noatime --compression lz4 --remote-ratelimit 51200 ::{hostname}-{now} /root/backup root???? 28996???? 1? 0 Nov20 pts/5??? 00:00:01 borg create --verbose --filter AME --list --stats --show-rc --noatime --compression lz4 --remote-ratelimit 51200 ::{hostname}-{now} /root/backup root???? 29166???? 1? 0 12:53 pts/5??? 00:00:00 borg prune --list --prefix {hostname}- --show-rc --keep-daily 40 --keep-weekly 70 --keep-monthly 60 root???? 29167 29166? 0 12:53 pts/5??? 00:00:01 borg prune --list --prefix {hostname}- --show-rc --keep-daily 40 --keep-weekly 70 --keep-monthly 60 root???? 31569???? 1? 0 08:00 pts/5??? 00:00:00 borg create --verbose --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 ::{hostname}-{now} /root/backup root???? 31573 31569? 0 08:00 pts/5??? 00:00:01 borg create --verbose --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 ::{hostname}-{now} /root/backup root???? 32135???? 1? 0 13:02 ???????? 00:00:01 borg prune --list --prefix {hostname}- --show-rc --keep-daily 40 --keep-weekly 70 --keep-monthly 60 I'm not able to kill these process with -9 Signal. These are keeping restart itself. On 21/11/17 6:00 PM, Thomas Waldmann wrote: >> /root???? 10323???? 1? 0 Nov20 pts/5??? 
00:00:00 borg create --verbose >> --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 >> ::{hostname}-{now} /root/backup// > Please remove the filter and the ratelimit until you get something > basically working. > > Also there is a double trailing slash you should fix. > >> borg version --> borg 1.1.1 > Use >= 1.1.2. > -- Abhishek Garg From will at thearete.co.uk Tue Nov 21 11:10:24 2017 From: will at thearete.co.uk (William Furnass) Date: Tue, 21 Nov 2017 16:10:24 +0000 Subject: [Borgbackup] Borg stale on client In-Reply-To: References: <0fbcec8f-61a1-1da0-4ece-40287d9b6dce@wingify.com> <0e927968-a5e6-af94-8fd5-1f504b98c7c4@waldmann-edv.de> Message-ID: Abhishek, Are you still able to list the archives in your repo? I ask as 'borg create ...' _and_ 'borg list -v --debug $BORG_REPO' hang for me with no output after 'borg list -v --debug' if I start them after running 'sudo -s'. However, listing archives works fine if I run 'borg list' without elevated privileges. Cheers, Will On 21 November 2017 at 12:30, Thomas Waldmann wrote: >> /root 10323 1 0 Nov20 pts/5 00:00:00 borg create --verbose >> --filter AME --list --stats --show-rc --noatime --remote-ratelimit 51200 >> ::{hostname}-{now} /root/backup// > > Please remove the filter and the ratelimit until you get something > basically working. > > Also there is a double trailing slash you should fix. > >> borg version --> borg 1.1.1 > > Use >= 1.1.2. > > -- > > GPG ID: 9F88FB52FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From abhishek.garg at wingify.com Wed Nov 22 03:26:56 2017 From: abhishek.garg at wingify.com (Abhishek Garg) Date: Wed, 22 Nov 2017 13:56:56 +0530 Subject: [Borgbackup] Borg stale on client In-Reply-To: References: <0fbcec8f-61a1-1da0-4ece-40287d9b6dce@wingify.com> <0e927968-a5e6-af94-8fd5-1f504b98c7c4@waldmann-edv.de> Message-ID: I verified with another server, It's working from there. Seems like this issue is from a specific machine. Machine Info (where I'm facing issue): lsb_release -a No LSB modules are available. Distributor ID:??? Debian Description:??? Debian GNU/Linux 7.4 (wheezy) Release:??? 7.4 Codename:??? wheezy uname -r 3.2.0-4-amd64 On 21/11/17 9:47 PM, Abhishek Garg wrote: > Will, This repo is fresh and nothing is stored yet. List in repo is > working however, it's returning empty value. > > I'm syncing my backup directory on remote server with SSH, SSH > connection is verified and working fine. > > > On 21/11/17 9:40 PM, William Furnass wrote: >> Abhishek, >> >> Are you still able to list the archives in your repo?? I ask as 'borg >> create ...' _and_ 'borg list -v --debug $BORG_REPO' hang for me with >> no output after 'borg list -v --debug' if I start them after running >> 'sudo -s'.? However, listing archives works fine if I run 'borg list' >> without elevated privileges. >> >> Cheers, >> >> Will >> >> On 21 November 2017 at 12:30, Thomas Waldmann >> wrote: >>>> /root???? 10323???? 1? 0 Nov20 pts/5??? 00:00:00 borg create --verbose >>>> --filter AME --list --stats --show-rc --noatime --remote-ratelimit >>>> 51200 >>>> ::{hostname}-{now} /root/backup// >>> Please remove the filter and the ratelimit until you get something >>> basically working. >>> >>> Also there is a double trailing slash you should fix. >>> >>>> borg version --> borg 1.1.1 >>> Use >= 1.1.2. 
>>> >>> -- >>> >>> GPG ID: 9F88FB52FAF7B393 >>> GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 >>> >>> _______________________________________________ >>> Borgbackup mailing list >>> Borgbackup at python.org >>> https://mail.python.org/mailman/listinfo/borgbackup >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > -- Abhishek Garg Software Engineer, DevOps Wingify Software Pvt. Ltd. p: +91-9015346739| s: abhishek.garg_wingify From gait at ATComputing.nl Wed Nov 22 07:44:40 2017 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 22 Nov 2017 13:44:40 +0100 Subject: [Borgbackup] Getting non-recursive listing of a directory with 'borg list' fails Message-ID: <254c3b84-2a2a-5c69-999e-87a445f4aaf4@ATComputing.nl> Hello, I am trying to get a listing of a directory on a certain level only, from an archive, but I get nothing. borg list \ '::data_backup at _data_backup_.zfs_snapshot_borg-2017-11-21T15:44:21' \ 'data/backup/.zfs/snapshot/borg-2017-11-21T15:44:21/*' Without the '/*' I get a listing of the complete contents of the directory in the archive in the $BORG_REPO, but I only want the names at that particular level. As documented * shall not include path separators. Why is that? Gerrit From tw at waldmann-edv.de Wed Nov 22 08:07:04 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 22 Nov 2017 14:07:04 +0100 Subject: [Borgbackup] Getting non-recursive listing of a directory with 'borg list' fails In-Reply-To: <254c3b84-2a2a-5c69-999e-87a445f4aaf4@ATComputing.nl> References: <254c3b84-2a2a-5c69-999e-87a445f4aaf4@ATComputing.nl> Message-ID: > I am trying to get a listing of a directory on a certain level only, > from an archive, > but I get nothing. > > > borg list \ > '::data_backup at _data_backup_.zfs_snapshot_borg-2017-11-21T15:44:21' \ > ?????????????? 'data/backup/.zfs/snapshot/borg-2017-11-21T15:44:21/*' > > Without the '/*' I get a listing of the complete contents of the > directory in the archive in the $BORG_REPO, > but I only want the names at that particular level. > > As documented * shall not include path separators. Guess it is because the positional PATHS arguments should be "roots" where the recursion starts, not patterns. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From gait at ATComputing.nl Wed Nov 22 08:22:10 2017 From: gait at ATComputing.nl (Gerrit A. Smit) Date: Wed, 22 Nov 2017 14:22:10 +0100 Subject: [Borgbackup] Getting non-recursive listing of a directory with 'borg list' fails In-Reply-To: References: <254c3b84-2a2a-5c69-999e-87a445f4aaf4@ATComputing.nl> Message-ID: <7dfe340b-5c4d-aee1-1aee-599b46e8c00b@ATComputing.nl> Op 22-11-17 om 14:07 schreef Thomas Waldmann: > Guess it is because the positional PATHS arguments should be "roots" > where the recursion starts, not patterns. OK, but that would mean there is no way to get a 'top' level listing only? Or, in other words: the is no way to prevent the recursion? Gerrit From tw at waldmann-edv.de Mon Nov 27 00:24:56 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 27 Nov 2017 06:24:56 +0100 Subject: [Borgbackup] borgbackup 1.1.3 released! Message-ID: <715fd238-41c5-1d86-47ef-d087831186ec@waldmann-edv.de> Released borgbackup 1.1.3 with security and bug fixes. Also a nice improvement for borg mount and some other small features. 
https://github.com/borgbackup/borg/releases/tag/1.1.3 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From felix.schwarz at oss.schwarz.eu Mon Nov 27 03:20:13 2017 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Mon, 27 Nov 2017 09:20:13 +0100 Subject: [Borgbackup] borgbackup 1.1.3 released! In-Reply-To: <5878932f-115d-8e20-4728-ba4cf6f9933a@oss.schwarz.eu> References: <715fd238-41c5-1d86-47ef-d087831186ec@waldmann-edv.de> <5878932f-115d-8e20-4728-ba4cf6f9933a@oss.schwarz.eu> Message-ID: <7caa4eb2-87fe-0474-ef22-5d3fc38f8da8@oss.schwarz.eu> Am 27.11.2017 um 09:17 schrieb Felix Schwarz: > Am 27.11.2017 um 06:24 schrieb Thomas Waldmann: >> Released borgbackup 1.1.3 with security and bug fixes. > > Did you request a CVE number already? Not enough coffee this morning: The changelog actually mentions CVE-2017-15914. From felix.schwarz at oss.schwarz.eu Mon Nov 27 03:17:32 2017 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Mon, 27 Nov 2017 09:17:32 +0100 Subject: [Borgbackup] borgbackup 1.1.3 released! In-Reply-To: <715fd238-41c5-1d86-47ef-d087831186ec@waldmann-edv.de> References: <715fd238-41c5-1d86-47ef-d087831186ec@waldmann-edv.de> Message-ID: <5878932f-115d-8e20-4728-ba4cf6f9933a@oss.schwarz.eu> Am 27.11.2017 um 06:24 schrieb Thomas Waldmann: > Released borgbackup 1.1.3 with security and bug fixes. Did you request a CVE number already? Also I'd like to understand the impact of the security fix. So it seems a malicious borg client with SSH access to the server could read arbitrary borg repos (but not arbitrary files, right?). If these repos are encrypted an attacker can get just encrypted blocks, right? Is it possible for an attacker to delete or damage another (encrypted) repo? thank you very much, Felix From tw at waldmann-edv.de Mon Nov 27 08:21:46 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 27 Nov 2017 14:21:46 +0100 Subject: [Borgbackup] 1.1.x security issue clarifications In-Reply-To: <5878932f-115d-8e20-4728-ba4cf6f9933a@oss.schwarz.eu> References: <715fd238-41c5-1d86-47ef-d087831186ec@waldmann-edv.de> <5878932f-115d-8e20-4728-ba4cf6f9933a@oss.schwarz.eu> Message-ID: <4409ad45-ec5c-da4c-269e-628d31d81873@waldmann-edv.de> On 11/27/2017 09:17 AM, Felix Schwarz wrote: > > Am 27.11.2017 um 06:24 schrieb Thomas Waldmann: >> Released borgbackup 1.1.3 with security and bug fixes. > > Also I'd like to understand the impact of the security fix. So it seems a > malicious borg client with SSH access to the server As one usually uses ssh keys (or maybe password login for interactive setups), the authentication done by ssh limits the scope of the attackers to your *allowed* and *authenticated* users. This makes this issue a rather low severity one - you could know who is attacking. > could read arbitrary borg repos The security issue was only present in the (new in 1.1) borg serve option "--restrict-to-repository=...". This is why 1.0.x is not affected. If you use "--restrict-to-path=..." (which is present since longer), you're not affected. This also limits the scope of this vulnerability. People who still use 1.0.x or who just upgraded to 1.1.x, but did not change their borg serve restrictions setup (and still use the 1.0 --restrict-to-path mechanism, e.g. in .ssh/authorized_keys) are not affected. > (but not arbitrary files, right?). Correct. > If these repos are encrypted an > attacker can get just encrypted blocks, right? > Is it possible for an attacker to delete or damage another (encrypted) repo? 
Yes, stuff would be encrypted (assuming encryption is used and different keys are used, as usual). But guess he could delete other's repos (assuming filesystem permissions allow it, like for a shared account that only uses borg features for separation). Also, low level ops maybe could do damage (stuff that does not ask for encryption password). Or malicious client code only doing repo ops. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From qzwx2007 at gmail.com Thu Nov 30 03:24:54 2017 From: qzwx2007 at gmail.com (JK) Date: Thu, 30 Nov 2017 10:24:54 +0200 Subject: [Borgbackup] How to restore repo to another directory? Message-ID: We have several borg repos which I need to be periodically backup on to USB disks / cloud / remote sites. ? Borg with-lock seems to make it possible to take coherent snapshot from a repo e.g. using tar (something like: borg with-lock repo tar -czvf tarfile repo). As a part of backup / restore operations I also occasionally need to check that restore works and restored data is really usefull. How to do that for a backed up borg repo? It is not an option to restore backup repo over the original repo so is it possible to restore backup repo into another dir on the same machine (A) it was originally backed up? After that I need to somehow check that restored repo is functional, e.g. by running borg check or maybe extract archive from original repo and restored repo and diff them. I have not tested this but I guess restoring repo on the same machine will cause problems with identical repo ids and/or cache files. Is it somehow possible to change the repo id of the restored repo? Do I need to do something for the client caches? Repos are not encrypted. If this is not possible then maybe restore backup repo into another machine (B), extract archive from it and from original repo on machine (A) and compare them. Do I run into id / cache problems if both extract are done with a same client? Third and maybe simplest way is to take a checksum of the snapshot repo tar file, store it together with the backup and during restore test just restore the snapshot repo tar file + original checksum into target machine, recalc the checksum and compare those and assume that if the checksums match then the restored repo is (hopefully) functional. Are there any other ways to check the restored repo functionality? From felix.schwarz at oss.schwarz.eu Thu Nov 30 03:44:05 2017 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Thu, 30 Nov 2017 09:44:05 +0100 Subject: [Borgbackup] 1.1.x security issue clarifications In-Reply-To: <4409ad45-ec5c-da4c-269e-628d31d81873@waldmann-edv.de> References: <715fd238-41c5-1d86-47ef-d087831186ec@waldmann-edv.de> <5878932f-115d-8e20-4728-ba4cf6f9933a@oss.schwarz.eu> <4409ad45-ec5c-da4c-269e-628d31d81873@waldmann-edv.de> Message-ID: <236ec60c-81f6-f48e-fa99-560236156857@oss.schwarz.eu> Hey Thomas, thank you for the in-depth explanation. Felix From keepitsimpleengineer at gmail.com Fri Dec 1 19:26:53 2017 From: keepitsimpleengineer at gmail.com (Larry Johnson) Date: Fri, 1 Dec 2017 16:26:53 -0800 Subject: [Borgbackup] borg fails (I/O Error, Errno 5) to backup second Windows system to USB hard drive Message-ID: So I have two Windows 10 systems on separate disks/partitions. When I use borg to backup System a (SDD) to a USB external HDD it works fine. 
When I use borg to backup System b (HDD) to the USB external HDD, I get hundreds of errors before I cancel the job, similar to these: /run/media/ljohnson/WIN_10b/Program Files/WindowsApps/Microsoft.Office.OneNote_17.8730.20741.0_x64__8wekyb3d8bbwe/en-gb/locimages/offsym.ttf: stat: [Errno 5] Input/output error: '/run/media/ljohnson/WIN_10b/Program Files/WindowsApps/Microsoft.Office.OneNote_17.8730.20741.0_x64__8wekyb3d8bbwe/en-gb/locimages/offsym.ttf' More output at: https://pastebin.com/PvUd73DK The disks look like this: $ listblk # (alias listblk='lsblk -o NAME,SIZE,UUID,OWNER,GROUP,LABEL,MOUNTPOINT') NAME SIZE UUID OWNER GROUP LABEL MOUNTPOINT sda 465.8G root disk ??sda1 19.5G d7c05292-01fc-a6ac-9961-ef349991e2ac root disk arch_boot /boot ??sda2 62.5G 5a1ca848-b27c-966c-8bb5-fb498991521a root disk arch_root / ??sda3 16G cdb9e719-9cd1-47ed-911b-c6319b3c67a2 root disk ??sda4 1K root disk ??sda5 29.3G 11251398-24ec-a7ed-7b81-29f22eb17943 root disk arch_adj ??sda6 338.4G 6ff811c8-7f8d-9d83-14ac-059816ad4367 root disk arch_home /home sdb 489.1G root disk ??sdb1 500M E24CE6894CE65835 root disk System Reserved ??sdb2 488.6G F2F6E76FF6E73311 root disk sdd 931.5G root disk ??sdd1 931.5G 406C7D256C7D173E root disk Win10aBU /run/media/ljohnson/Win10aBU sde 931.5G root disk ??sde1 500M F2B64D2CB64CF321 root disk System Reserved ??sde2 931G 10D04EC2D04EAE32 root disk /run/media/ljohnson/WIN_10b System a is on /dev/sdb and System b is on /dev/sde The USB drive is /dev/sdd $ ls -l /run/media/ljohnson/Win10aBU | grep borg drwxrwxrwx 1 ljohnson users 4096 Nov 30 17:25 borgBU drwxrwxrwx 1 ljohnson users 552 Nov 30 18:28 borgBUb Backups of System a is on borgBU and System b was to have been on borgBUb Running on archlinux x86_64 using fuse 3.2.0-1 and borg 1.1.2-2 The run command was: $ borg create --stats --compression zlib,5 /run/media/ljohnson/Win10aBU/borgBUb::all /run/media/ljohnson/WIN_10b/ The ntfs partitions have adequate size and check out w/o errors? $ df -h ? Filesystem Size Used Avail Use% Mounted on /dev/sde2 932G 240G 692G 26% /run/media/ljohnson/WIN_10b /dev/sdd1 932G 216G 717G 24% /run/media/ljohnson/Win10aBU No problems accessing System b using file manager from archlinux . I am very much at a loss to figure out what's not working, especially since System a went so well... .. . -------------- next part -------------- An HTML attachment was scrubbed... URL: From keepitsimpleengineer at gmail.com Sun Dec 3 12:40:40 2017 From: keepitsimpleengineer at gmail.com (Larry Johnson) Date: Sun, 3 Dec 2017 09:40:40 -0800 Subject: [Borgbackup] Fwd: borg fails (I/O Error, Errno 5) to backup second Windows system to USB hard drive In-Reply-To: References: Message-ID: Solved by running: C:\WINDOWS\system32> compact.exe /CompactOS:never from elevated cmd on System b -and- installing ntfs-3g-system-compression (https://github.com/ebiggers/ntfs-3g-system-compression) However journalctl still shows: Dec 03 08:30:40 KISE-005 ntfs-3g[1110]: Could not load plugin /usr/lib/ntfs-3g/ntfs-plugin-80000018.so: No such file or directory Dec 03 08:30:40 KISE-005 ntfs-3g[1110]: Hint /usr/lib/ntfs-3g/ntfs-plugin-80000018.so: cannot open shared object file: No such file or directory ---------- Forwarded message ---------- From: Larry Johnson Date: Fri, Dec 1, 2017 at 4:26 PM Subject: borg fails (I/O Error, Errno 5) to backup second Windows system to USB hard drive To: borgbackup at python.org So I have two Windows 10 systems on separate disks/partitions. 
When I use borg to backup System a (SDD) to a USB external HDD it works fine. From tw at waldmann-edv.de Sun Dec 3 19:27:22 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 4 Dec 2017 01:27:22 +0100 Subject: [Borgbackup] borg 1.0.11 trying to read files in /proc In-Reply-To: References: Message-ID: <0c2a7eff-0109-ba61-2f42-719df0f61386@waldmann-edv.de> On 11/05/2017 05:22 PM, Tom Schutter wrote: > On one machine that I backup, borg is processing files in /proc which is > causing it to run for an extremely long time, if not forever. That's a known issue (not limited to borg) when trying to read all stuff from /proc. > # borg create\ > borgbackup at pixel:takifugu::2017-11-05T00:06:13\ > --exclude 'sh:/home/*/.adobe' --exclude 'sh:/home/*/.cache' > --exclude 'sh:/home/*/.thumbnails'\ > --exclude /root/.cache\ > --exclude /var/cache --exclude /var/lock --exclude /var/run > --exclude /var/tmp\ > --compression lz4\ > --stats --verbose\ > /etc /home /opt /srv /root /usr/local /var That's really strange. Are you sure that / or /proc was not in the "paths" arguments? Can you try to reduce the command to the minimum that reproduces the problem?
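For reference, a generic sketch of keeping borg away from virtual filesystems when a whole root filesystem is given as a path (repository path, archive name and mountpoints here are placeholders, not taken from the report above):

  $ borg create --stats --verbose --one-file-system \
        --exclude /proc --exclude /sys --exclude /dev \
        /path/to/repo::rootfs-2017-11-05 /

--one-file-system keeps borg from crossing mount points, so pseudo-filesystems such as /proc and /sys are not descended into even when / is a path argument; the explicit excludes are just an extra safety net on top of that.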
-- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Sun Dec 3 19:30:17 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 4 Dec 2017 01:30:17 +0100 Subject: [Borgbackup] key export In-Reply-To: References: Message-ID: <2416a3b3-db1b-1f2f-2715-d770622cd1fd@waldmann-edv.de> On 11/02/2017 10:53 AM, Jasper Knockaert wrote: > Hello > > If I use key export to backup the repository encryption key, what is the > format of the output? It's the same format as used in a borg keyfile (if you use keyfile not repokey encryption mode). > Is the exported key still encrypted or not? It is encrypted, you need to also remember your passphrase. We recently clarified the docs about that. > put it differently: in the case the exported key gets compromised, would > one need the repository password the decrypt the archives or not? You'ld need the passphrase to decrypt the (exported) key. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Sun Dec 3 19:33:26 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 4 Dec 2017 01:33:26 +0100 Subject: [Borgbackup] Saving logging output to a log file In-Reply-To: References: Message-ID: On 11/01/2017 04:30 PM, Dave Cottingham wrote: > I wanted borg to append the logging output to a log file, and I have > succeeded in doing that, but my solution is so clunky I'm hoping someone > can point me to a better solution. > > There doesn't seem to be any direct way to specify a log file to borg, That's because usually one uses I/O redirection by the shell for that. borg ... >stdout.log 2>stderr.log or borg ... >both.log 2>&1 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From mateusz.kijowski at gmail.com Mon Dec 4 07:39:12 2017 From: mateusz.kijowski at gmail.com (Mateusz Kijowski) Date: Mon, 4 Dec 2017 13:39:12 +0100 Subject: [Borgbackup] Deduplication not efficient on single file VM images Message-ID: Hi, I am in the process of migrating my backups from zbackup[1] to borg for my VM machine images, and the deduplication is not behaving as I expected it to. I am running borgbackup 1.0.9 from jessie-backports with lzma compression, default chunker settings and repokey encryption (password provided by environment variable). The backup image files are created by another tool (so these are proper backups, not live disk images) and I am piping them into borg stdin in my wrapper script. I also set timestamp on borg create so that I can prune the backups nicely. I have separate borg repositories per VM, so that I can load them in parallel thus making it fit in my backup window. Both the source files and the repositories are on a single machine (but on different storage). Also, from my experiments it doesn't seem that IOPS are a problem. 
The biggest problem right now is that Borg seems to fail to deduplicate most of the data: # du -sh {zbackup,borg}/vm-100 1,9G zbackup/vm-100 8,0G borg/vm-100 Another, similar machine repo with a single archive in it shows that the baseline is fine: # du -sh {zbackup,borg}/vm-404 1,6G zbackup/vm-404 1,6G borg/vm-404 Borg stats output for first, second and last borg create for vm-100: ------------------------------------------------------------------------------ Archive name: vzdump-qemu-100-2017_11_20-15_52_32.vma Archive fingerprint: d73fcf2fc30807338336b3dbcfe831f7ee1a853a50b086071b4efeb2004d7dad Time (start): Mon, 2017-11-20 13:36:45 Time (end): Mon, 2017-11-20 15:52:32 Duration: 2 hours 15 minutes 46.54 seconds Number of files: 1 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 14.49 GB 1.99 GB 1.96 GB All archives: 14.49 GB 1.99 GB 1.96 GB Unique chunks Total chunks Chunk index: 4975 5510 ------------------------------------------------------------------------------ ------------------------------------------------------------------------------ Archive name: vzdump-qemu-100-2017_11_29-01_00_02.vma Archive fingerprint: 09f7303382e8669e05c030033dcd9c824da004b5e6ac93f7ebfb55589b17bff1 Time (start): Tue, 2017-11-28 22:59:18 Time (end): Wed, 2017-11-29 01:00:02 Duration: 2 hours 43.32 seconds Number of files: 1 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 14.50 GB 1.99 GB 1.72 GB All archives: 28.99 GB 3.98 GB 3.67 GB Unique chunks Total chunks Chunk index: 8697 11023 ------------------------------------------------------------------------------ ------------------------------------------------------------------------------ Archive name: vzdump-qemu-100-2017_12_02-02_34_28.vma Archive fingerprint: 828ab7dac873ff441f18864c16858d40c9eb34a0e26985c9b7e95508358c9d18 Time (start): Sat, 2017-12-02 00:09:49 Time (end): Sat, 2017-12-02 02:34:28 Duration: 2 hours 24 minutes 38.86 seconds Number of files: 1 ------------------------------------------------------------------------------ Original size Compressed size Deduplicated size This archive: 14.51 GB 1.99 GB 1.63 GB All archives: 72.51 GB 9.95 GB 8.51 GB Unique chunks Total chunks Chunk index: 19058 27600 ------------------------------------------------------------------------------ The machine itself is a simple shorewall based router and the image doesn't change much. The only content that is changing are the logs, so I am truly amazed why the deduplication performs so weakly. I guess I could run zerofill on the VM images, but on the other hand zbackup somehow managed to deduplicate most of the stuff, so I wouldn't think that this is the issue. Is there something I am missing from the documentation regarding tuning for my use-case? Since I have a bunch of existing backups I am currently converting them from zbackup to borg, using parallel "zbackup restore ... | borg create ... -" pipelines. Perhaps there is a problem with multiple processes using the same cache dir? Should the cache dir be seaparate for different repos? Another problem is that the backup takes way longer (zbackup takes around 8 minutes to process the non-initial 14GB images, borg takes more than 2 hours every time). My assumption is that this difference is due to zbackup using multiple threads fot lzma compression. 
I also understand that I could use lz4 to have large processing time benefits at the cost of disk space. I think that I can live with that, provided that deduplication works as expected. Example borg init args: "init", "/mnt/zbackup/borg/vm-100" Example borg create args: "create", "--stats", "-v", "--timestamp", "2017-11-29T00:00:02", "--compression", "lzma", "/mnt/zbackup/borg/vm-100::vzdump-qemu-100-2017_11_29-01_00_02.vma", "-" I would appreciate any hints, let me know if you need more data. Mateusz [1] http://zbackup.org/ From kbass at kenbass.com Mon Dec 4 14:53:18 2017 From: kbass at kenbass.com (Ken Bass) Date: Mon, 4 Dec 2017 14:53:18 -0500 Subject: [Borgbackup] Deduplication not efficient on single file VM images In-Reply-To: References: Message-ID: <49bef72e-b0ea-470d-e3af-05dbb704cb7e@kenbass.com> On 12/04/2017 07:39 AM, Mateusz Kijowski wrote: > The backup image files are created by another tool (so these are > proper backups, not live disk images) and I am piping them into borg > stdin in my wrapper script. I also set timestamp on borg create so > that I can prune the backups nicely. > > I have separate borg repositories per VM, so that I can load them in > parallel thus making it fit in my backup window. Both the source files > and the repositories are on a single machine (but on different > storage). Also, from my experiments it doesn't seem that IOPS are a > problem. From your description, I don't see how dedup would be possible. Deduplication is within a single repo, so by using separate repositories you have prevented that. Also as far as compressibility / dedup you might want to ensure that unused disk space in your VM images is zeroed out rather than random data, but that assumes you are using a single repo. Unfortunately, I think your whole parallel concept is not going to work. From tw at waldmann-edv.de Mon Dec 4 15:32:51 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 4 Dec 2017 21:32:51 +0100 Subject: [Borgbackup] Deduplication not efficient on single file VM images In-Reply-To: References: Message-ID: <28a8cddf-013f-2c77-a108-74c5994e7763@waldmann-edv.de> > The backup image files are created by another tool (so these are > proper backups, not live disk images) and I am piping them into borg > stdin in my wrapper script. Likely because of the data format used. > The biggest problem right now is that Borg seems to fail to > deduplicate most of the data: > > # du -sh {zbackup,borg}/vm-100 > 1,9G zbackup/vm-100 > 8,0G borg/vm-100 I don't know zbackup details, but maybe you need finer granularity for borg's chunker? > Borg stats output for first, second and last borg create for vm-100: > ------------------------------------------------------------------------------ > Archive name: vzdump-qemu-100-2017_11_20-15_52_32.vma Ah, you use proxmox? So guess one needs to research that .vma format... https://git.proxmox.com/?p=pve-qemu.git;a=blob;f=vma_spec.txt It puts a UUID into the VMA extent headers. Looks like this is always a different UUID in each .vma file. So that spoils dedup for the chunks containing that UUID. extent = 59 clusters a 64kiB = ~ 3.8MB borg's default target block size is 2MiB - so borg's chunks will often contain that UUID (and thus not dedup with other .vma) and every 2nd chunk without an UUID inside might as well not match chunks from other .vma due different cutting places. So, you need to lower target chunk size significantly. You could check what zbackup uses or just try some target chunk sizes >= 64kiB. 
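To make that concrete, a hedged sketch of requesting a smaller target chunk size (the archive name is a placeholder; --chunker-params takes CHUNK_MIN_EXP,CHUNK_MAX_EXP,HASH_MASK_BITS,HASH_WINDOW_SIZE, and the target chunk size is 2^HASH_MASK_BITS, so 16 means roughly 64 kiB):

  $ borg create --stats --chunker-params 10,23,16,4095 \
        /mnt/zbackup/borg/vm-100::vzdump-qemu-100-test -

(with the .vma piped in on stdin as before). Smaller HASH_MASK_BITS gives finer-grained dedup at the price of more chunks and a larger chunk index.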
> The machine itself is a simple shorewall based router and the image > doesn't change much. The only content that is changing are the logs, > so I am truly amazed why the deduplication performs so weakly. You could try doing a snapshot manually and reading the raw image data (from the blockdevice or whatever) into borg. > I guess I could run zerofill on the VM images, but on the other hand > zbackup somehow managed to deduplicate most of the stuff, so I > wouldn't think that this is the issue. Yeah, looks like. > Is there something I am missing from the documentation regarding > tuning for my use-case? --chunker-params maybe. See also docs/misc/... But be aware the small chunks means also more chunks and more management overhead. > processes using the same cache dir? Should the cache dir be seaparate > for different repos? No, it creates a separate dir per repo under the borg cache dir anyway. > Another problem is that the backup takes way longer (zbackup takes > around 8 minutes to process the non-initial 14GB images, borg takes > more than 2 hours every time). That's likely the consequence of dedup not kicking in as much as expected. > My assumption is that this difference > is due to zbackup using multiple threads fot lzma compression. And that. But if you're in a hurry, just don't use lzma, but lz4. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Mon Dec 4 15:38:51 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 4 Dec 2017 21:38:51 +0100 Subject: [Borgbackup] Deduplication not efficient on single file VM images In-Reply-To: References: Message-ID: <4c1bc89c-b4ca-ef10-38e4-26161888c948@waldmann-edv.de> BTW, Mateusz, if you have time, maybe talk to the proxmox developers after you got borg working with reasonable deduplication. Would be great to see borg in proxmox some day. :) -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Mon Dec 4 15:45:38 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 4 Dec 2017 21:45:38 +0100 Subject: [Borgbackup] Deduplication not efficient on single file VM images In-Reply-To: <49bef72e-b0ea-470d-e3af-05dbb704cb7e@kenbass.com> References: <49bef72e-b0ea-470d-e3af-05dbb704cb7e@kenbass.com> Message-ID: > From your description, I don't see how dedup would be possible. There are somehow 3 "dimensions" of dedup: - historical dedup for all archive created from same machine (will work if done right) - inner dedup within same backup archive (will work if done right) - cross-machine dedup (will not work, as it requires putting multiple machines into same repo) -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From mateusz.kijowski at gmail.com Thu Dec 7 11:16:47 2017 From: mateusz.kijowski at gmail.com (Mateusz Kijowski) Date: Thu, 7 Dec 2017 17:16:47 +0100 Subject: [Borgbackup] Deduplication not efficient on single file VM images In-Reply-To: <28a8cddf-013f-2c77-a108-74c5994e7763@waldmann-edv.de> References: <28a8cddf-013f-2c77-a108-74c5994e7763@waldmann-edv.de> Message-ID: 2017-12-04 21:32 GMT+01:00 Thomas Waldmann : >> The backup image files are created by another tool (so these are >> proper backups, not live disk images) and I am piping them into borg >> stdin in my wrapper script. > > Likely because of the data format used. Indeed it turned out so. 
> >> The biggest problem right now is that Borg seems to fail to >> deduplicate most of the data: >> >> # du -sh {zbackup,borg}/vm-100 >> 1,9G zbackup/vm-100 >> 8,0G borg/vm-100 > > I don't know zbackup details, but maybe you need finer granularity for > borg's chunker? Also correct, details follow :-) > >> Borg stats output for first, second and last borg create for vm-100: >> ------------------------------------------------------------------------------ >> Archive name: vzdump-qemu-100-2017_11_20-15_52_32.vma > > Ah, you use proxmox? So guess one needs to research that .vma format... > > https://git.proxmox.com/?p=pve-qemu.git;a=blob;f=vma_spec.txt > > It puts a UUID into the VMA extent headers. > Looks like this is always a different UUID in each .vma file. > So that spoils dedup for the chunks containing that UUID. Ouch, for some reason I thought that the VMAs do not have any internal structure and that they are just block device copies. Thanks for finding this as I would probably resist acknowledging that there is some backup-specific metadata sprayed all over the VM dump. I would rather assume that all metadata is at the beginning of the file and then the raw blockdevice contents. After reading [1] I also kinda understand why this format is cool and might have out-of-order data. > extent = 59 clusters a 64kiB = ~ 3.8MB > > borg's default target block size is 2MiB - so borg's chunks will often > contain that UUID (and thus not dedup with other .vma) and every 2nd > chunk without an UUID inside might as well not match chunks from other > .vma due different cutting places. > > So, you need to lower target chunk size significantly. You could check > what zbackup uses or just try some target chunk sizes >= 64kiB. AFAIU zbackup has a upper bound of 64k on chunk size [2] which fits this kind of data nicely. After doing some thought experiments it seems to me that for VMA the best chunk size would be 4k (because it seems that the image data itself is kept in 4k blocks within the 64 clusters), but this will produce way too many chunks (and the indices would be too large). > >> The machine itself is a simple shorewall based router and the image >> doesn't change much. The only content that is changing are the logs, >> so I am truly amazed why the deduplication performs so weakly. > > You could try doing a snapshot manually and reading the raw image data > (from the blockdevice or whatever) into borg. Yeah, but then I wouldn't backup the vm configuration. Restoring would be a lot more manual work I then. >> Is there something I am missing from the documentation regarding >> tuning for my use-case? > > --chunker-params maybe. See also docs/misc/... > > But be aware the small chunks means also more chunks and more management > overhead. 
Yeah so I did a couple of calculations and tests with chunk size from 16k up to 64k: -rw------- 1 root backup 30304298 Dec 6 01:24 borg_test_16k_10,18,14,4095/vm-100/index.468 -rw------- 1 root backup 17825978 Dec 6 11:25 borg_test_32k_10,18,15,4095/vm-100/index.458 -rw------- 1 root backup 17825978 Dec 6 15:21 borg_test_32k_12,20,15,4095/vm-100/index.455 -rw------- 1 root backup 10485898 Dec 6 15:10 borg_test_64k_10,18,16,4095/vm-100/index.478 -rw------- 1 root backup 10485898 Dec 6 00:44 borg_test_64k_12,20,16,4095/vm-100/index.477 1.9G zbackup/vm-100 2.3G borg_test_16k_10,18,14,4095/vm-100 2.2G borg_test_32k_10,18,15,4095/vm-100 2.2G borg_test_32k_12,20,15,4095/vm-100 2.3G borg_test_64k_10,18,16,4095/vm-100 2.3G borg_test_64k_12,20,16,4095/vm-100 -rw------- 1 root backup 17825978 Dec 6 02:48 borg_test_16k_10,18,14,4095/vm-101/index.357 -rw------- 1 root backup 10485898 Dec 6 12:30 borg_test_32k_10,18,15,4095/vm-101/index.352 -rw------- 1 root backup 10485898 Dec 6 16:35 borg_test_32k_12,20,15,4095/vm-101/index.355 -rw------- 1 root backup 5244058 Dec 6 16:19 borg_test_64k_10,18,16,4095/vm-101/index.378 -rw------- 1 root backup 5244058 Dec 6 01:54 borg_test_64k_12,20,16,4095/vm-101/index.381 1.5G zbackup/vm-101 1.7G borg_test_16k_10,18,14,4095/vm-101 1.7G borg_test_32k_10,18,15,4095/vm-101 1.7G borg_test_32k_12,20,15,4095/vm-101 1.8G borg_test_64k_10,18,16,4095/vm-101 1.8G borg_test_64k_12,20,16,4095/vm-101 -rw------- 1 root backup 171652778 Dec 6 23:48 borg_test_16k_10,18,14,4095/vm-200/index.5285 -rw------- 1 root backup 87578378 Dec 7 07:24 borg_test_32k_10,18,15,4095/vm-200/index.5535 -rw------- 1 root backup 87578378 Dec 7 12:01 borg_test_32k_12,20,15,4095/vm-200/index.5538 -rw------- 1 root backup 51516698 Dec 7 12:32 borg_test_64k_10,18,16,4095/vm-200/index.6146 -rw------- 1 root backup 51516698 Dec 6 19:33 borg_test_64k_12,20,16,4095/vm-200/index.6180 26G zbackup/vm-200 26G borg_test_16k_10,18,14,4095/vm-200 28G borg_test_32k_10,18,15,4095/vm-200 28G borg_test_32k_12,20,15,4095/vm-200 31G borg_test_64k_10,18,16,4095/vm-200 31G borg_test_64k_12,20,16,4095/vm-200 -rw------- 1 root backup 17825978 Dec 7 03:35 borg_test_16k_10,18,14,4095/vm-999/index.640 -rw------- 1 root backup 10485898 Dec 7 10:47 borg_test_32k_10,18,15,4095/vm-999/index.722 -rw------- 1 root backup 10485898 Dec 7 15:35 borg_test_32k_12,20,15,4095/vm-999/index.724 -rw------- 1 root backup 5244058 Dec 7 15:58 borg_test_64k_10,18,16,4095/vm-999/index.869 -rw------- 1 root backup 5244058 Dec 6 23:00 borg_test_64k_12,20,16,4095/vm-999/index.882 3.2G zbackup/vm-999 3.0G borg_test_16k_10,18,14,4095/vm-999 3.4G borg_test_32k_10,18,15,4095/vm-999 3.4G borg_test_32k_12,20,15,4095/vm-999 4.1G borg_test_64k_10,18,16,4095/vm-999 4.2G borg_test_64k_12,20,16,4095/vm-999 So it seems that borg is on par with zbackup dedupe/compression. Also the processing times are more reasonable (even with lzma enabled). I think that I will use 32k chunksize because the saved space seems to be at least 10% and my largest projected (original) repository size is somewhere around 7 TiB. Based on the index size (90MB) of vm-200 which has (original) size somewhere around 250 GB the index size for 7TiB repo should be around 2.5 GB. Assuming that the 2.1 factor for RAM usage from docs/misc/create_chunker-params.txt holds it should take a bit more than 5 GiB memory and I can live with that. Maybe I am doing some obvious miscalculation here, if you see it please let me know. 
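Spelling that estimate out with the numbers above (a rough sanity check only, mixing GB and GiB loosely): vm-200 has about 90 MB of index for roughly 250 GB of original data, i.e. about 0.36 MB of index per GB; scaling linearly, 7 TiB is about 7168 GB, so 7168 * 0.36 MB gives roughly 2.6 GB of index, and applying the 2.1 factor gives 2.1 * 2.6 GB, i.e. around 5.4 GiB of RAM.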
One assumption that might be off-mark is that the index size depends on the original repo size and not the actual repo size. >> Another problem is that the backup takes way longer (zbackup takes >> around 8 minutes to process the non-initial 14GB images, borg takes >> more than 2 hours every time). > > That's likely the consequence of dedup not kicking in as much as expected. Yeah, now it went down to 14 mins or so, so again it's on par with zbackup. Thanks again for pointing out that the chunk size might be the problem. [1] https://git.proxmox.com/?p=pve-qemu.git;a=blob;f=backup.txt; [2] http://zbackup.org/#scalability From mateusz.kijowski at gmail.com Thu Dec 7 11:20:03 2017 From: mateusz.kijowski at gmail.com (Mateusz Kijowski) Date: Thu, 7 Dec 2017 17:20:03 +0100 Subject: [Borgbackup] Deduplication not efficient on single file VM images In-Reply-To: <4c1bc89c-b4ca-ef10-38e4-26161888c948@waldmann-edv.de> References: <4c1bc89c-b4ca-ef10-38e4-26161888c948@waldmann-edv.de> Message-ID: I guess I could do that Thomas, but the only thing that comes to mind I could do is letting them know of my chunk-size findings :-) I don't think I am able to prepare a pull request for their vzdump backup tool (although it's written in perl and I saw borg bindings on CPAN). 2017-12-04 21:38 GMT+01:00 Thomas Waldmann : > BTW, Mateusz, if you have time, maybe talk to the proxmox developers > after you got borg working with reasonable deduplication. > > Would be great to see borg in proxmox some day. :) > > > -- > > GPG ID: 9F88FB52FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From tw at waldmann-edv.de Sat Dec 16 14:28:33 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 16 Dec 2017 20:28:33 +0100 Subject: [Borgbackup] IMPORTANT: do not run borg 1.1.x check --repair Message-ID: <38e58af6-aa5d-e693-33cf-df1f290c3eff@waldmann-edv.de> A serious bug was found in borg 1.1.x code (1.0.x is NOT affected). So, don't run borg check --repair using 1.1.x until the fix is released / deployed. For details, see there: https://github.com/borgbackup/borg/issues/3444 From felix.schwarz at oss.schwarz.eu Sun Dec 17 06:07:55 2017 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Sun, 17 Dec 2017 12:07:55 +0100 Subject: [Borgbackup] IMPORTANT: do not run borg 1.1.x check --repair In-Reply-To: <38e58af6-aa5d-e693-33cf-df1f290c3eff@waldmann-edv.de> References: <38e58af6-aa5d-e693-33cf-df1f290c3eff@waldmann-edv.de> Message-ID: <7a02faeb-9f77-a614-3b37-d8ddb5deed33@oss.schwarz.eu> Am 16.12.2017 um 20:28 schrieb Thomas Waldmann: > A serious bug was found in borg 1.1.x code (1.0.x is NOT affected). > > So, don't run borg check --repair using 1.1.x until the fix is released > / deployed. > > For details, see there: > > https://github.com/borgbackup/borg/issues/3444 What's the recommended action for distro maintainers? Can we just cherry-pick the commits from #3444 on top of 1.1.3 or should we wait for 1.1.4? As this is a potentially devastating issue we like to get some kind of fix to our users asap. 
Felix From tw at waldmann-edv.de Sun Dec 17 09:38:03 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 17 Dec 2017 15:38:03 +0100 Subject: [Borgbackup] IMPORTANT: do not run borg 1.1.x check --repair In-Reply-To: <7a02faeb-9f77-a614-3b37-d8ddb5deed33@oss.schwarz.eu> References: <38e58af6-aa5d-e693-33cf-df1f290c3eff@waldmann-edv.de> <7a02faeb-9f77-a614-3b37-d8ddb5deed33@oss.schwarz.eu> Message-ID: On 12/17/2017 12:07 PM, Felix Schwarz wrote: > > Am 16.12.2017 um 20:28 schrieb Thomas Waldmann: >> A serious bug was found in borg 1.1.x code (1.0.x is NOT affected). >> >> So, don't run borg check --repair using 1.1.x until the fix is released >> / deployed. >> >> For details, see there: >> >> https://github.com/borgbackup/borg/issues/3444 > > What's the recommended action for distro maintainers? Can we just cherry-pick > the commits from #3444 on top of 1.1.3 or should we wait for 1.1.4? If you have time, cherry-pick from #3444 (or even just add that b'path' change as seen in the one changeset, the other stuff is optional, see also the commit comments). https://github.com/borgbackup/borg/pull/3445 https://github.com/borgbackup/borg/pull/3445/commits/e09892caec8a63d59e909518c4e9c230dbd69774 1.1.4 will come soon (hopefully in december), but has some bigger changes that might need some more work from maintainers. > As this is a potentially devastating issue we like to get some kind of fix to > our users asap. Yeah, a patched 1.1.3 package makes sense. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From felix.schwarz at oss.schwarz.eu Mon Dec 18 07:05:20 2017 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Mon, 18 Dec 2017 13:05:20 +0100 Subject: [Borgbackup] IMPORTANT: do not run borg 1.1.x check --repair In-Reply-To: References: <38e58af6-aa5d-e693-33cf-df1f290c3eff@waldmann-edv.de> <7a02faeb-9f77-a614-3b37-d8ddb5deed33@oss.schwarz.eu> Message-ID: <319d1614-8cc1-0328-4e50-f9a3bade6c91@oss.schwarz.eu> Am 17.12.2017 um 15:38 schrieb Thomas Waldmann: > If you have time, cherry-pick from #3444 (or even just add that b'path' > change as seen in the one changeset, the other stuff is optional, see > also the commit comments). Yes, the Fedora/EPEL package does exactly this. Is it possible to corrupt the metadata (easily) so I can verify myself that this bug is closed? fs From mfseeker at gmail.com Mon Dec 18 11:48:13 2017 From: mfseeker at gmail.com (Stan Armstrong) Date: Mon, 18 Dec 2017 12:48:13 -0400 Subject: [Borgbackup] (no subject) Message-ID: <4832f163-d600-3faf-ab7f-85b95493722e@gmail.com> I have daily, weekly, and monthly backups of my main Debian linux partition. I want to restore one of those to a new empty partition (or to a partition that has only a fresh Debian installation), and I want it to be bootable. Nothing I have tried so far works. Surely this is either simple or impossible to do. Can someone tell me which? From mfseeker at gmail.com Mon Dec 18 11:56:03 2017 From: mfseeker at gmail.com (Stan Armstrong) Date: Mon, 18 Dec 2017 12:56:03 -0400 Subject: [Borgbackup] Restore to a bootable Debian linux partition Message-ID: <8104b955-6d23-7833-3eb5-a684555a3acd@gmail.com> I have daily, weekly, and monthly backups of my main Debian linux partition. I want to restore one of those to a fresh bootable partition. Nothing I have tried so far works. Surely this is either simple or impossible to do. Can someone tell me which? 
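For what it's worth, a rough sketch of the usual way to get a restored Debian system booting again (device names, mountpoint, repository and archive name are examples only, and it assumes a simple single-partition layout with grub):

  $ sudo mount /dev/sdXn /mnt/target
  $ cd /mnt/target && sudo borg extract --numeric-owner /path/to/repo::monthly-2017-12-01
  $ sudo mount --bind /dev /mnt/target/dev
  $ sudo mount --bind /proc /mnt/target/proc
  $ sudo mount --bind /sys /mnt/target/sys
  $ sudo chroot /mnt/target grub-install /dev/sdX
  $ sudo chroot /mnt/target update-grub

borg extract restores into the current working directory, and the restored /etc/fstab (and grub configuration) still has to point at the new partition's UUID for the result to boot.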
Stan From public at enkore.de Mon Dec 18 12:24:03 2017 From: public at enkore.de (Marian Beermann) Date: Mon, 18 Dec 2017 18:24:03 +0100 Subject: [Borgbackup] Restore to a bootable Debian linux partition In-Reply-To: <8104b955-6d23-7833-3eb5-a684555a3acd@gmail.com> References: <8104b955-6d23-7833-3eb5-a684555a3acd@gmail.com> Message-ID: <46775168-8c1f-95a6-cec7-b86350a3d25e@enkore.de> On 18.12.2017 17:56, Stan Armstrong wrote: > I have daily, weekly, and monthly backups of my main Debian linux > partition. I want to restore one of those to a fresh bootable partition. > > Nothing I have tried so far works. So... what did you try? Your post doesn't say. From christian at detilly.net Wed Dec 20 16:16:16 2017 From: christian at detilly.net (Christian de Tilly) Date: Wed, 20 Dec 2017 22:16:16 +0100 Subject: [Borgbackup] Corrupted segment reference count - corrupted index or hints Message-ID: <3ee30984-f2e6-17f5-318c-dd735fcb5562@detilly.net> Hi, I have a problem which surprises me. At the end of a create, I have a Local Exception with the error in the title. However, the archive is correctly created as I can see it in 2 ways : the check of the repository detects no error and a mount of th last archive seems correct. Nevetheless, the create stops and the script is ended. Where is the corrupted segment and how can I correct it ? Here is my platform : Platform: Linux Dell 4.14.6-1-ARCH #1 SMP PREEMPT Thu Dec 14 21:26:16 UTC 2017 x86_64 Linux: arch Borg: 1.1.3? Python: CPython 3.6.3 PID: 7654? CWD: /home/christian sys.argv: ['/usr/bin/borg', 'create', '-v', '--stats', '/mnt/NAS/Sauv/Depo::Dell-12-19:14h43', '/mnt/Data/Sauv/T?l?phone', '/mnt/Data/Agenda', '/mnt/D-Win/Tmp'] SSH_ORIGINAL_COMMAND: None In the attached file, there is all the story. I added a list of the repo. Thank's for your help, Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- Synchronizing chunks cache... Archives: 19, w/ cached Idx: 18, w/ outdated Idx: 0, w/o cached Idx: 1. Reading cached archive chunk index for Dell-04-28:12h44 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-05-31:12h49 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-06-23:12h50 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-07-28:12h49 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-08-31:12h49 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-09-28:12h50 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-10-30:12h59 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-11-04:12h46 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-11-10:12h54 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-11-17:12h53 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-11-24:13h00 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-11-30:13h02 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-12-01:13h24 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-12-04:13h02 ... Merging into master chunks index ... Reading cached archive chunk index for Dell-12-12:13h01 ... Merging into master chunks index ... 
Reading cached archive chunk index for Dell-12-15:15h23 ... Merging into master chunks index ... Fetching and building archive index for Dell-12-15:17h13 ... Merging into master chunks index ... Reading cached archive chunk index for S2017-02-28:22h37 ... Merging into master chunks index ... Reading cached archive chunk index for S2017-03-31:18h50 ... Merging into master chunks index ... Done. Local Exception Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/borg/archiver.py", line 4157, in main exit_code = archiver.run(args) File "/usr/lib/python3.6/site-packages/borg/archiver.py", line 4089, in run return set_ec(func(args)) File "/usr/lib/python3.6/site-packages/borg/archiver.py", line 149, in wrapper return method(self, args, repository=repository, **kwargs) File "/usr/lib/python3.6/site-packages/borg/archiver.py", line 544, in do_create create_inner(archive, cache) File "/usr/lib/python3.6/site-packages/borg/archiver.py", line 509, in create_inner archive.save(comment=args.comment, timestamp=args.timestamp) File "/usr/lib/python3.6/site-packages/borg/archive.py", line 479, in save self.repository.commit() File "/usr/lib/python3.6/site-packages/borg/repository.py", line 412, in commit self.compact_segments() File "/usr/lib/python3.6/site-packages/borg/repository.py", line 759, in compact_segments assert segments[segment] == 0, 'Corrupted segment reference count - corrupted index or hints' AssertionError: Corrupted segment reference count - corrupted index or hints Platform: Linux Dell 4.14.6-1-ARCH #1 SMP PREEMPT Thu Dec 14 21:26:16 UTC 2017 x86_64 Linux: arch Borg: 1.1.3 Python: CPython 3.6.3 PID: 7654 CWD: /home/christian sys.argv: ['/usr/bin/borg', 'create', '-v', '--stats', '/mnt/NAS/Sauv/Depo::Dell-12-19:14h43', '/mnt/Data/Sauv/T?l?phone', '/mnt/Data/Agenda', '/mnt/D-Win/Tmp'] SSH_ORIGINAL_COMMAND: None # borg list $dpn Enter passphrase for key /mnt/NAS/Sauv/Depo: S2017-02-28:22h37 Tue, 2017-02-28 22:38:06 [f912977bee8e6e0533d3133e1718b990bcb29c78683c7415a22446459d710d76] S2017-03-31:18h50 Fri, 2017-03-31 18:50:03 [0c112dd94eaf4de4de3a725c8e876a67ca6dbcbc0431ac796629fb3ad03718cd] Dell-04-28:12h44 Fri, 2017-04-28 14:08:37 [62a0ad9c507c385d26c89cbb7848d539b570f43e139820f92dadf47924b8bb99] Dell-05-31:12h49 Wed, 2017-05-31 12:49:51 [6092b714d9510491c0ece2eaf7309923a44d76d77ebd7203e6b001b56d084009] Dell-06-23:12h50 Fri, 2017-06-23 12:50:37 [5926fe9a0f02a16443fa298ec4f7053e69d0177c99d6aec46680a0500993bc06] Dell-07-28:12h49 Fri, 2017-07-28 12:49:36 [e4c322652619ddb1557ac82a4d1ec12d380105f3afdc62135eb397daec06692f] Dell-08-31:12h49 Thu, 2017-08-31 12:49:47 [efe5adffe4f7d79edc7541551b85921a89b6388edfbc5cebadadb210cc596595] Dell-09-28:12h50 Thu, 2017-09-28 12:50:19 [3abccfd2b21a5ff0258837ad63223f2da458103b1fbb4b1df1b71a69f9898dba] Dell-10-30:12h59 Mon, 2017-10-30 12:59:52 [15383298fa9f2576981b98c920b9ab041b0714a246013174e8e7c300ae85eeb9] Dell-11-04:12h46 Sat, 2017-11-04 12:46:44 [0f6b46a2ae51f5b1672b913ddb1a63d44189af1047994aa07a44ff87cc205902] Dell-11-10:12h54 Fri, 2017-11-10 12:54:30 [7a4cfbde9d1a46924b6b4966ba4ca4d29ea49f8b0cbc9dc928d7079d0d109fa8] Dell-11-17:12h53 Fri, 2017-11-17 12:53:09 [d615e5c18eb20819e240be12675a17b416df917c02c10f4b608c4e05fe8bb78e] Dell-11-24:13h00 Fri, 2017-11-24 13:00:58 [1f3c7b1031079aa962efabcf6b3c9a22073cc5b777008f0f8fa7bab5d43ff7c6] Dell-11-30:13h02 Thu, 2017-11-30 14:28:01 [0bc5147c974a3dec1f59fd697aef51e66b6537710fa1ecb5150898e94889f262] Dell-12-01:13h24 Fri, 2017-12-01 13:24:49 
[7747eaefea7906bdb2fbd395156a0e4300ff5ae58291eea389d1d971bd3f7867] Dell-12-04:13h02 Mon, 2017-12-04 13:02:54 [ea220901815dd7bd75152cc00c3d076d5c5b54e207f0ad8269392fa032fd0d8a] Dell-12-12:13h01 Tue, 2017-12-12 13:01:12 [4be246182ce6f968e03f2870399333755d1bf68271554fea31a1f97f8c43a34b] Dell-12-15:15h23 Fri, 2017-12-15 15:24:25 [711005f066d273d7bea88e371cdb0406dfc80b0c6af0da892439006c27d27f6b] Dell-12-15:17h13 Fri, 2017-12-15 17:14:46 [2555684019a2742661799b940a42d6da58b6d98a2f5eeeec401ce9f38521fc5d] Dell-12-19:14h43 Tue, 2017-12-19 14:43:53 [d78e4565845154c14c3acb40ce275c1a9e8d00f07a5cae4e0e9d590ab24d55dc] From tw at waldmann-edv.de Wed Dec 20 18:04:33 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 21 Dec 2017 00:04:33 +0100 Subject: [Borgbackup] Corrupted segment reference count - corrupted index or hints In-Reply-To: <3ee30984-f2e6-17f5-318c-dd735fcb5562@detilly.net> References: <3ee30984-f2e6-17f5-318c-dd735fcb5562@detilly.net> Message-ID: <6c6fc77f-d138-ecab-84b2-67ba0d7214b3@waldmann-edv.de> > I have a problem which surprises me. At the end of a create, I have a > Local Exception with the error in the title. Yup. borg create is actually a 2-step process: 1. create archive, commit 2. compact segments, commit It crashes for you in step 2, thus the archive is valid and committed. There might be just some non-compact segments because compact_segments crashes, which is unusual. You can try to delete the repo_dir/hints.* file and run borg check --repair afterwards USING BORG 1.1.4 OR A BUGFIXED VERSION OF 1.1.3. Do not use an unfixed 1.1.x as it might damage your archives (see my previous posts). -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Sat Dec 30 07:22:20 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 30 Dec 2017 13:22:20 +0100 Subject: [Borgbackup] borgbackup 1.1.4 released! Message-ID: Released borgbackup 1.1.4 with a data corruption fix and other bug fixes. Also added zstd compression and some other small features. https://github.com/borgbackup/borg/releases/tag/1.1.4 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
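Since the release adds zstd: a minimal example of selecting it (repository and data paths are placeholders; zstd takes a level from 1 to 22, with 3 as a reasonable middle ground):

  $ borg create --compression zstd,3 /path/to/repo::{hostname}-{now} ~/data

Data compressed this way can only be read back by borg versions that know about zstd, i.e. 1.1.4 or later.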