From borgbackup at list-post.ddt-consult.de Tue Apr 6 06:41:27 2021
From: borgbackup at list-post.ddt-consult.de (Markus Schönhaber)
Date: Tue, 6 Apr 2021 12:41:27 +0200
Subject: [Borgbackup] Borg vs. full/incremental backup
Message-ID: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de>

Hello,

I've been using duplicity/duply for quite some time. The concept there
is the more "traditional" approach of creating full backups once in a
while and incremental backups in between. Because of how those backup
sets are created, if an incremental backup set gets corrupted somehow
(disk failure or whatever), this doesn't affect previous incremental or
full sets. If a full set gets corrupted, other full sets and their
dependent incremental sets aren't affected.

As I understand it, that's different for a Borg repository. If a chunk
in a repository gets corrupted, all files in all archives in this
repository that reference this chunk are affected.
Is my understanding correct?
If yes, how do you cope with this / is there a "best practice"? Create
multiple repositories? Use them in turn? Use them in parallel? Something
else entirely?

--
Regards
mks

From felix.schwarz at oss.schwarz.eu Tue Apr 6 08:55:52 2021
From: felix.schwarz at oss.schwarz.eu (Felix Schwarz)
Date: Tue, 6 Apr 2021 14:55:52 +0200
Subject: [Borgbackup] Borg vs. full/incremental backup
In-Reply-To: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de>
References: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de>
Message-ID: <6e645e98-40dd-4d9c-d467-d7211ba76daa@oss.schwarz.eu>

On 06.04.21 at 12:41, Markus Schönhaber wrote:
> As I understand it, that's different for a Borg repository. If a chunk
> in a repository gets corrupted, all files in all archives in this
> repository that reference this chunk are affected.
> Is my understanding correct?

Yes, that's my understanding as well.

> If yes, how do you cope with this / is there a "best practice"? Create
> multiple repositories? Use them in turn? Use them in parallel? Something
> else entirely?

This is what I'm doing:
- run "borg check" regularly to detect data corruption
- rsync the borg repo to a different data center
- back up important data to a second borg repo

From lazyvirus at gmx.com Tue Apr 6 10:09:13 2021
From: lazyvirus at gmx.com (Bzzzz)
Date: Tue, 6 Apr 2021 16:09:13 +0200
Subject: [Borgbackup] Borg vs. full/incremental backup
In-Reply-To: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de>
References: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de>
Message-ID: <20210406160913.11223318@msi.defcon1.lan>

On Tue, 6 Apr 2021 12:41:27 +0200
Markus Schönhaber wrote:

> As I understand it, that's different for a Borg repository. If a chunk
> in a repository gets corrupted, all files in all archives in this
> repository that reference this chunk are affected.
> Is my understanding correct?

Yes.

> If yes, how do you cope with this / is there a "best practice"? Create
> multiple repositories? Use them in turn? Use them in parallel?
> Something else entirely?

Usually, you don't limit yourself to only one backup in one place, and
you want your backup data to be fully checked. So using a file system
like ZFS, which ensures both the redundancy AND the integrity of this
data, is something you should do.
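For example, a minimal sketch (untested as written; the disk names are
placeholders, adjust to your hardware):

  # mirrored pool: every block is stored twice and checksummed
  zpool create backup mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
  zfs create backup/borg
  # periodically (e.g. from cron) re-read and verify everything;
  # bad copies get repaired from the intact mirror side
  zpool scrub backup
  zpool status backup

That way a flipped bit in a chunk is caught and healed below borg,
before "borg check" ever sees it.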
Jean-Yves

From borgbackup at list-post.ddt-consult.de Wed Apr 7 02:58:20 2021
From: borgbackup at list-post.ddt-consult.de (Markus Schönhaber)
Date: Wed, 7 Apr 2021 08:58:20 +0200
Subject: [Borgbackup] Borg vs. full/incremental backup
In-Reply-To: <6e645e98-40dd-4d9c-d467-d7211ba76daa@oss.schwarz.eu>
References: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de> <6e645e98-40dd-4d9c-d467-d7211ba76daa@oss.schwarz.eu>
Message-ID:

06.04.21, 14:55 +0200, Felix Schwarz:

> This is what I'm doing:
> - run "borg check" regularly to detect data corruption

Yes, I do that on a weekly basis.

> - rsync the borg repo to a different data center
> - back up important data to a second borg repo

OK, I think that's what I'm going to do for now. I'll simply set up a
second repository and make backups to both. That should be enough to
help against a corrupted data chunk in a single backup target.

In my case, the total amount of data to be backed up weighs a few
hundred GiB and therefore takes some time to back up as a whole, but the
amount of data that changes over time is comparatively small. So daily
backups with borg are pretty fast, and it won't hurt to do them twice.

Thanks Felix,

--
Regards
mks

From borgbackup at list-post.ddt-consult.de Wed Apr 7 02:58:54 2021
From: borgbackup at list-post.ddt-consult.de (Markus Schönhaber)
Date: Wed, 7 Apr 2021 08:58:54 +0200
Subject: [Borgbackup] Borg vs. full/incremental backup
In-Reply-To: <20210406160913.11223318@msi.defcon1.lan>
References: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de> <20210406160913.11223318@msi.defcon1.lan>
Message-ID: <06b77da4-afd0-9404-d248-34cc976c3a46@list-post.ddt-consult.de>

06.04.21, 16:09 +0200 Bzzzz:

> Usually, you don't limit yourself to only one backup in one place, and

Of course. ATM I'm thinking about how to make a single backup target
resilient to "bit rot", though.

> you want your backup data to be fully checked. So using a file system
> like ZFS, which ensures both the redundancy AND the integrity of this
> data, is something you should do.

Good point! I'll consider this.

Thanks Jean-Yves,

--
Regards
mks

From l0f4r0 at tuta.io Wed Apr 7 07:47:50 2021
From: l0f4r0 at tuta.io (l0f4r0 at tuta.io)
Date: Wed, 7 Apr 2021 13:47:50 +0200 (CEST)
Subject: [Borgbackup] Borg vs. full/incremental backup
In-Reply-To:
References: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de> <6e645e98-40dd-4d9c-d467-d7211ba76daa@oss.schwarz.eu>
Message-ID:

Hi,

7 Apr 2021, 08:58 from borgbackup at list-post.ddt-consult.de:

> 06.04.21, 14:55 +0200, Felix Schwarz:
>
>> This is what I'm doing:
>> - run "borg check" regularly to detect data corruption
>>
> Yes, I do that on a weekly basis.
>
Personally, I do a `check ::borg_archive` after each single hourly
backup (on a not-always-connected USB drive). I mean, I don't decouple
backup and check. Since one backup+check takes (today) less than an
hour, that's OK, but it will get more complicated once that's no longer
the case. Maybe I'll accept one backup every two hours, I don't know yet
(the repo stays locked as long as the backup or check hasn't
finished)...

If there is a best practice regarding checking, I will take it as well!

> OK, I think that's what I'm going to do for now. I'll simply set up a
> second repository and make backups to both. That should be enough to
> help against a corrupted data chunk in a single backup target.
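In script form, that routine could look roughly like this (untested
sketch; repo URLs, source paths and prune policy are placeholders):

  #!/bin/sh
  # back up the same data to two independent repositories
  for REPO in ssh://backup1.example/./borg1 ssh://backup2.example/./borg2
  do
      borg create --stats "$REPO::{hostname}-{now}" /etc /home /srv
      borg prune --keep-daily 7 --keep-weekly 4 "$REPO"
      borg check "$REPO"
  done

If one repo turns out to have a bad chunk, the other was written
completely independently and should still be intact.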
Try to put your 2 repos on different disks (different brands/models) and
in different locations as well (in order to protect yourself against a
single point of failure - SPOF).

Best regards,
l0f4r0

From clickwir at gmail.com Wed Apr 7 11:11:02 2021
From: clickwir at gmail.com (Zack Coffey)
Date: Wed, 7 Apr 2021 09:11:02 -0600
Subject: [Borgbackup] Borg vs. full/incremental backup
In-Reply-To: <06b77da4-afd0-9404-d248-34cc976c3a46@list-post.ddt-consult.de>
References: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de> <20210406160913.11223318@msi.defcon1.lan> <06b77da4-afd0-9404-d248-34cc976c3a46@list-post.ddt-consult.de>
Message-ID:

Markus, to help with the routine maintenance of Borgbackup, I use
borgmatic. Have been for a few years and it's quite nice. I was writing
my own scripts to handle things, then found borgmatic and all of that
pretty much went away. It's a really nice addition.
https://torsion.org/borgmatic/

As for 'bit rot', back up to a file system that offers checksum
protection as an extra layer beyond what Borgbackup does. I back up to a
2-drive RAID1 with btrfs. Borgbackup has its own consistency checks, but
so does btrfs.

There's also this nice script to help run regular maintenance on btrfs
that we use as well.
https://github.com/kdave/btrfsmaintenance

You can choose to back up to 2+ locations, or back up to one and then
rsync it to another location. I've used both methods; both have
pros/cons. My general preference is to let Borgbackup handle the
multiple locations.

The old way of 'full/incremental' backups is a holdover from a long time
ago. It's still used, but there are better ways. The more modern
'initial/update' way (which is very similar) is more finely tuned. It's
how we've been handling backups for over 15 years. We don't do
occasional full backups; they're a waste of time and resources. The way
Borgbackup handles this is a more modern and smarter approach.

On Wed, Apr 7, 2021 at 12:58 AM Markus Schönhaber <
borgbackup at list-post.ddt-consult.de> wrote:

> 06.04.21, 16:09 +0200 Bzzzz:
>
> > Usually, you don't limit yourself to only one backup in one place, and
>
> Of course. ATM I'm thinking about how to make a single backup target
> resilient to "bit rot", though.
>
> > you want your backup data to be fully checked. So using a file system
> > like ZFS, which ensures both the redundancy AND the integrity of this
> > data, is something you should do.
>
> Good point! I'll consider this.
>
> Thanks Jean-Yves,
>
> --
> Regards
> mks
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From borgbackup at list-post.ddt-consult.de Wed Apr 7 11:57:58 2021
From: borgbackup at list-post.ddt-consult.de (Markus Schönhaber)
Date: Wed, 7 Apr 2021 17:57:58 +0200
Subject: [Borgbackup] Borg vs. full/incremental backup
In-Reply-To:
References: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de> <20210406160913.11223318@msi.defcon1.lan> <06b77da4-afd0-9404-d248-34cc976c3a46@list-post.ddt-consult.de>
Message-ID: <74b90828-ea36-fefc-e6ef-212174c8fa16@list-post.ddt-consult.de>

07.04.21, 17:11 +0200, Zack Coffey:

> There's also this nice script to help run regular maintenance on btrfs
> that we use as well.
> https://github.com/kdave/btrfsmaintenance

Thanks for the pointer, I'll take a look at this.
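For the archives, getting started with borgmatic seems to be as simple
as this (untested here, going by its docs):

  pip3 install --user borgmatic
  generate-borgmatic-config   # writes a sample config file to edit
  borgmatic --verbosity 1     # runs create/prune/check as configured

and, for the btrfs layer underneath the repo (mount point is a
placeholder):

  btrfs scrub start -B /mnt/backup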
--
Regards
mks

From billk at iinet.net.au Wed Apr 7 18:57:18 2021
From: billk at iinet.net.au (William Kenworthy)
Date: Thu, 8 Apr 2021 06:57:18 +0800
Subject: [Borgbackup] Borg vs. full/incremental backup
In-Reply-To:
References: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de> <20210406160913.11223318@msi.defcon1.lan> <06b77da4-afd0-9404-d248-34cc976c3a46@list-post.ddt-consult.de>
Message-ID: <3a80eaed-5f1e-8f85-507d-d5ccf535fecd@iinet.net.au>

On 7/4/21 11:11 pm, Zack Coffey wrote:
> Markus, to help with the routine maintenance of Borgbackup, I use
> borgmatic. Have been for a few years and it's quite nice.
> ...
> The old way of 'full/incremental' backups is a holdover from a long
> time ago. It's still used, but there are better ways. The more modern
> 'initial/update' way (which is very similar) is more finely tuned.
> It's how we've been handling backups for over 15 years.
> We don't do occasional full backups; they're a waste of time and
> resources. The way Borgbackup handles this is a more modern and
> smarter approach.

I agree with this - there are smarter ways these days. Last week I had
to recover a few terabytes from offline backup due to a poorly
thought-out rm wildcard taking out most of the online borg repo as well
as the data it had backed up.

The online backups are ~20 repos for individual machines and data
stores on a moosefs cluster with a goal of 4 - fairly bulletproof except
for the finger trouble. The offline backup is a roughly daily manual
borgbackup of the online backups to a single repo on a 2 TB removable
disk - there is a lot of duplication across the individual repos, giving
a very good deduplication ratio that allows a stupidly high amount of
original data on the 2 TB disk.

Restoration was trouble-free and didn't take an excessive amount of
time, so I am happy with the level of protection and the safety of the
data. Whilst the above incident has been the worst in a while,
borgbackup has been very useful for me!

Using traditional incrementals and trying to work forward from the last
full backup would have been a lot messier and more time-consuming, with
(in my experience) a higher chance of failure.

BillK

From l0f4r0 at tuta.io Thu Apr 8 18:28:52 2021
From: l0f4r0 at tuta.io (l0f4r0 at tuta.io)
Date: Fri, 9 Apr 2021 00:28:52 +0200 (CEST)
Subject: [Borgbackup] Borg vs. full/incremental backup
In-Reply-To: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de>
References: <3108b58a-4461-d6dd-aa6d-67dbd128a331@list-post.ddt-consult.de>
Message-ID:

Hi,

6 Apr 2021, 12:41 from borgbackup at list-post.ddt-consult.de:

> As I understand it, that's different for a Borg repository. If a chunk
> in a repository gets corrupted, all files in all archives in this
> repository that reference this chunk are affected.
> Is my understanding correct?
> If yes, how do you cope with this / is there a "best practice"? Create
> multiple repositories? Use them in turn? Use them in parallel? Something
> else entirely?

https://github.com/borgbackup/borg/issues/225 is interesting regarding
this aspect. Some of the mentioned workarounds have already been
discussed here.

Best regards,
l0f4r0

From florian at whnr.de Wed Apr 28 09:24:13 2021
From: florian at whnr.de (Florian Wehner)
Date: Wed, 28 Apr 2021 09:24:13 -0400
Subject: [Borgbackup] Could compression of next chunk be done during transmission of last?
Message-ID: <42206F48-125E-4D80-A5F7-17BF94A49023@whnr.de>

Hello!
I observed my last large backup and saw the following sequential order:

1) Borg compresses a bunch of files
2) That chunk gets transmitted through the network (max. ~16 MB/s)
3) Repeat 1, or get some data back from the remote after a while

Could the compression run in parallel with the transmission? The queue
would only have to be one chunk deep, but that would cut transmission
time by almost 50% in my observed case.

-Flo

--
Florian Wehner
+1 (857) 234 6798

From oh at dom.de Fri May 7 10:18:02 2021
From: oh at dom.de (Oliver Hoffmann)
Date: Fri, 7 May 2021 16:18:02 +0200
Subject: [Borgbackup] Disaster Recovery or backup of backup
Message-ID: <40324cf2-93a9-df66-62db-a5d6f2a3f89c@dom.de>

Hi all,

I have been using borg for quite a while now and it works just fine.

Now that I'm going to set up two backup servers with approximately 20 TB
of backup data and roughly 40 clients each, I wonder how to prevent data
loss. In other words, how do I prepare for a disaster? Meaning: system
and data gone.

Simply rsyncing all folders to a NAS or something and setting up a new
borg server won't work, and two independent backups, as suggested, mean
too much traffic for the network as well as for some clients. There is
simply not enough time for a double backup.

So, here comes my approach. I intend to use xen, or better xcp-ng, and
on top the actual borg server as a VM. Snapshots/exports will be made
every night and kept somewhere else. The data will be rsynced off the
server too. In case of a disaster I just need similar or identical HW,
set up xen, import the saved VM and copy all the borg folders with the
repos back. That way I'd have data and the corresponding borg server put
together again. Or I resurrect the server on some other xen host and
access the data via NFS, which would be quicker in case of an urgent
restore.

Do I miss something, or does that sound feasible?

Thanks for your thoughts,

Oliver

-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature
Type: application/pgp-signature
Size: 203 bytes
Desc: OpenPGP digital signature
URL:

From tw at waldmann-edv.de Sat May 8 14:56:01 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sat, 8 May 2021 20:56:01 +0200
Subject: [Borgbackup] Disaster Recovery or backup of backup
In-Reply-To: <40324cf2-93a9-df66-62db-a5d6f2a3f89c@dom.de>
References: <40324cf2-93a9-df66-62db-a5d6f2a3f89c@dom.de>
Message-ID:

> Now that I'm going to set up two backup servers with approximately 20 TB
> of backup data and roughly 40 clients each, I wonder how to prevent data
> loss.

Are they at the same place or at different places? Mutual backup?

> Simply rsyncing all folders to a NAS or something and setting up a new
> borg server won't work

Why?

> and two independent backups, as suggested, mean too
> much traffic for the network as well as for some clients. There is
> simply not enough time for a double backup.

Did you actually try that? Except for the first backup, backups are
usually rather quick and do not cause much traffic.

Do you use 1 repo per client?

> So, here comes my approach. I intend to use xen, or better xcp-ng, and
> on top the actual borg server as a VM. Snapshots/exports will be made
> every night and kept somewhere else. The data will be rsynced off the
> server too. In case of a disaster I just need similar or identical HW,
> set up xen, import the saved VM and copy all the borg folders with the
> repos back. That way I'd have data and the corresponding borg server put
> together again.
More complex, but you know better the effort you need to do to get it going again. One can also recover from a full file backup of a simple linux system (no hypervisor, no VMs) using a usb live system, if you are used to it. > Or I resurrect the server on some other xen host and access the > data via nfs which would be quicker in case of an urgent restore. That would mean the data goes over the network twice. Faster to get it up again, but if a big restore using that is faster in the end has to be seen. For normal production, I'ld avoid having the borg repo on NFS, but rather have it on local storage. Besides the above, you could also prevent some issues: Use zfs (mirror, raid-z6) so a single disk failure or bad sector does not cause a need for a recovery from backup. Also ECC memory, so RAM issues do not corrupt your backups. Of course that is not a replacement for a 2nd backup, but reduces the frequency when you'll actually have to use it. As the FAQ explains also, be careful with error propagation and crypto when using rsync to make repo copies. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From public at enkore.de Sat May 8 17:36:01 2021 From: public at enkore.de (d0) Date: Sat, 8 May 2021 23:36:01 +0200 Subject: [Borgbackup] Disaster Recovery or backup of backup In-Reply-To: <40324cf2-93a9-df66-62db-a5d6f2a3f89c@dom.de> References: <40324cf2-93a9-df66-62db-a5d6f2a3f89c@dom.de> Message-ID: Hello Oliver, have you considered a two-tier approach, using Borg to collect backups from your clients and then protecting those backups using filesystem-level snapshots (both btrfs and ZFS can send/receive snapshots from other hosts). That would allow you to work around the lack of replication facilities in Borg itself. Cheers, Marian Am Fr., 7. Mai 2021 um 16:18 Uhr schrieb Oliver Hoffmann : > Hi all, > > > I use borg for quite a while now and it works just fine. > > Now that I'm going to set up two Backup servers with approximately 20 TB > of backup data and roughly 40 clients each I wonder how to prevent data > loss. In other words how do I prepare for a disaster? Meaning system and > data gone. > > Simply rsyncing all folders to a nas or something and setting up a new > borg server won't work and two independent backups as suggested means to > much traffic for the network as well as for some clients. There is > simply not enough time for a double backup. > > So, here comes my approach. I intend to use xen or better xcp-ng and on > top the actual borg server as a VM. Snapshots/exports will be made every > night and kept somewhere else. The data will be rsynced off the server > too. In case of a disaster I just need a similar or identical HW, set up > xen, import the saved VM and copy all the borg folders with the repos > back. That way I'd have data and corresponding borg server put together > again. Or I resurrect the server on some other xen host and access the > data via nfs which would be quicker in case of an urgent restore. > > Do I miss something or does that sound feasible? > > Thank for your thoughts, > > Oliver > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From hm at hmerkl.com Sun May 9 12:28:27 2021
From: hm at hmerkl.com (Hans Merkl)
Date: Sun, 9 May 2021 09:28:27 -0700
Subject: [Borgbackup] Can't figure out why files are being detected as modified (files created by backintime)
Message-ID:

I am pretty new to Linux and borg in particular. Overall things work
really well, but there is one thing I can't figure out.

I have a folder with backups created by backintime. I am now trying to
back this up to another disk. Every time I run the backup, borg detects
all files as modified and reads them. The deduplication avoids writing
new data, but it still reads 500 GB every time I run borg.

I have set "--files-cache mtime,size" but it still detects all files as
changed. Are there any other properties that make borg think a file may
have been changed?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From billk at iinet.net.au Mon May 10 00:01:47 2021
From: billk at iinet.net.au (William Kenworthy)
Date: Mon, 10 May 2021 12:01:47 +0800
Subject: [Borgbackup] delete data from an archive?
Message-ID:

An HTML attachment was scrubbed...
URL:

From oh at dom.de Mon May 10 07:39:23 2021
From: oh at dom.de (Oliver Hoffmann)
Date: Mon, 10 May 2021 13:39:23 +0200
Subject: [Borgbackup] Disaster Recovery or backup of backup
In-Reply-To:
References: <40324cf2-93a9-df66-62db-a5d6f2a3f89c@dom.de>
Message-ID: <35791ad3-b1eb-c7e4-bfe8-ff417337a6aa@dom.de>

>> Now that I'm going to set up two backup servers with approximately 20 TB
>> of backup data and roughly 40 clients each, I wonder how to prevent data
>> loss.
>
> Are they at the same place or at different places? Mutual backup?

Yes, same place. Two servers give more flexibility and a kind of
redundancy as well. I could move around VMs or data, for example.

>> Simply rsyncing all folders to a NAS or something and setting up a new
>> borg server won't work
>
> Why?

As I understood it, that way you can't access your backups anymore,
because the repos are encrypted by the server which made the backups.

>> and two independent backups, as suggested, mean too
>> much traffic for the network as well as for some clients. There is
>> simply not enough time for a double backup.
>
> Did you actually try that? Except for the first backup, backups are
> usually rather quick and do not cause much traffic.

Yes, I know what's possible in my setup. Borg is quick, yes, but it's
just too many clients/too much data.

> Do you use 1 repo per client?

Yes, each client has its own.

>> So, here comes my approach. I intend to use xen, or better xcp-ng, and
>> on top the actual borg server as a VM. Snapshots/exports will be made
>> every night and kept somewhere else. The data will be rsynced off the
>> server too. In case of a disaster I just need similar or identical HW,
>> set up xen, import the saved VM and copy all the borg folders with the
>> repos back. That way I'd have data and the corresponding borg server put
>> together again.
>
> More complex, but you know better the effort you need to do to get it
> going again.

Getting everything back to normal would mean replacing the HW, then
putting xcp-ng on it, importing the VM (the actual borg server), some
tweaks, and it's done.

> One can also recover from a full file backup of a simple linux system
> (no hypervisor, no VMs) using a usb live system, if you are used to it.

Possible, but USB sticks tend to fail.

>> Or I resurrect the server on some other xen host and access the
>> data via NFS, which would be quicker in case of an urgent restore.
>
> That would mean the data goes over the network twice. Faster to get it
> up again, but whether a big restore done that way is faster in the end
> remains to be seen. For normal production, I'd avoid having the borg
> repo on NFS, but rather have it on local storage.

The thing here is that the xen host itself is the NFS server. That way
the data is kind of local. Having a separate NFS server somewhere else
wouldn't be a good idea, I agree.

> Besides the above, you could also prevent some issues:
>
> Use zfs (mirror, raid-z6) so a single disk failure or bad sector does
> not cause a need for a recovery from backup. Also ECC memory, so RAM
> issues do not corrupt your backups.

I intend to put everything on proper servers. So ECC, a decent RAID
controller, SAS HDs, etc. are already there. In the end it'll be RAID6 +
hot spare. Nothing against zfs, but you need to use something all other
people involved can agree on. And that is, in my case, xen and "classic"
RAIDs.

> Of course that is not a replacement for a 2nd backup, but reduces the
> frequency when you'll actually have to use it.
>
> As the FAQ explains also, be careful with error propagation and crypto
> when using rsync to make repo copies.

OK, I'll look into that. Thank you for your thoughts!

Cheers,

Oliver

-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature
Type: application/pgp-signature
Size: 203 bytes
Desc: OpenPGP digital signature
URL:

From tw at waldmann-edv.de Mon May 10 08:26:31 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 10 May 2021 14:26:31 +0200
Subject: [Borgbackup] delete data from an archive?
In-Reply-To:
References:
Message-ID: <082207a6-5fc2-ff71-0eb9-d76b4358e9de@waldmann-edv.de>

> Is it possible to delete data from an archive? I am running out of
> space on the hard drive I am using for offline backups and would like
> to remove some rather large directories (~200 GB or so) within the
> archive that are effectively duplicated data, just not in a form that
> borgbackup de-duplicates.

"borg recreate --exclude" or "--pattern" can do that, but it requires
some free space.

Be very careful with this (do "--dry-run --list" first), as this could
unintentionally remove data you wanted to keep IF you specify wrong
exclude patterns.

> The only way I can see to do this is to export, delete, then run
> borgbackup on it again, which will take many days to do - is there
> another way?

Yes, recreate. It will still take quite some time.
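Something like this (sketch only - the exclude pattern and repo path are
placeholders for the directories you want gone):

  # 1) preview: nothing is modified, just list what would change
  borg recreate --dry-run --list --exclude 'home/*/duplicated-dir' /path/to/repo
  # 2) only if that listing looks right, rewrite the archives for real
  borg recreate --exclude 'home/*/duplicated-dir' /path/to/repo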
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From felix.schwarz at oss.schwarz.eu Mon May 10 07:53:24 2021
From: felix.schwarz at oss.schwarz.eu (Felix Schwarz)
Date: Mon, 10 May 2021 13:53:24 +0200
Subject: [Borgbackup] Disaster Recovery or backup of backup
In-Reply-To: <35791ad3-b1eb-c7e4-bfe8-ff417337a6aa@dom.de>
References: <40324cf2-93a9-df66-62db-a5d6f2a3f89c@dom.de> <35791ad3-b1eb-c7e4-bfe8-ff417337a6aa@dom.de>
Message-ID:

On 10.05.21 at 13:39, Oliver Hoffmann wrote:
>>> Simply rsyncing all folders to a NAS or something and setting up a new
>>> borg server won't work
>>
>> Why?
>
> As I understood it, that way you can't access your backups anymore,
> because the repos are encrypted by the server which made the backups.

Well, you can also copy the repo-specific secret key to a safe location
and load that key on the machine which needs to restore from that borg
repo :-)

>>> and two independent backups, as suggested, mean too
>>> much traffic for the network as well as for some clients. There is
>>> simply not enough time for a double backup.
>>
>> Did you actually try that? Except for the first backup, backups are
>> usually rather quick and do not cause much traffic.

You might have to look at remote storage replication if you really have
too much data to back up. Keep in mind that restoring from a borg backup
also needs some time, as borg does not utilize all server cores, so
restoring a couple of TB will take a while.

Felix

From tw at waldmann-edv.de Mon May 10 08:37:45 2021
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 10 May 2021 14:37:45 +0200
Subject: [Borgbackup] Can't figure out why files are being detected as modified (files created by backintime)
In-Reply-To:
References:
Message-ID: <30ddfb92-21af-2bcc-36af-90bf34cc5f59@waldmann-edv.de>

> I have a folder with backups created by backintime. I am now trying to
> back this up to another disk. Every time I run the backup, borg detects
> all files as modified and reads them. The deduplication avoids writing
> new data, but it still reads 500 GB every time I run borg.
>
> I have set "--files-cache mtime,size" but it still detects all files as
> changed. Are there any other properties that make borg think a file may
> have been changed?

Did you check the FAQ about this?

In your case (--files-cache=mtime,size), only this is relevant:
- mtime
- size
- full absolute path of the file

Also, when changing the --files-cache type, you always need to do >= 2
backups to see whether it works.

If you have a lot of different backup sets, BORG_FILES_CACHE_TTL might
be relevant (if the default is too low for your use case).

If you use BORG_FILES_CACHE_SUFFIX badly, that could also make it
malfunction.

From the borg 1.1.16 changelog entry: verbose files cache logging via
--debug-topic=files_cache, #5659. Use this if you suspect that borg does
not detect unmodified files as expected.

--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From oh at dom.de Mon May 10 08:53:20 2021
From: oh at dom.de (Oliver Hoffmann)
Date: Mon, 10 May 2021 14:53:20 +0200
Subject: [Borgbackup] Disaster Recovery or backup of backup
In-Reply-To:
References: <40324cf2-93a9-df66-62db-a5d6f2a3f89c@dom.de> <35791ad3-b1eb-c7e4-bfe8-ff417337a6aa@dom.de>
Message-ID: <78d06335-bd2b-3a9c-6ffe-0ef4b42f9bf3@dom.de>

> On 10.05.21 at 13:39, Oliver Hoffmann wrote:
>>>> Simply rsyncing all folders to a NAS or something and setting up a new
>>>> borg server won't work
>>>
>>> Why?
>>
>> As I understood it, that way you can't access your backups anymore,
>> because the repos are encrypted by the server which made the backups.
>
> Well, you can also copy the repo-specific secret key to a safe location
> and load that key on the machine which needs to restore from that borg
> repo :-)

Thank you, Felix! I didn't think about this simple solution.

>>>> and two independent backups, as suggested, mean too
>>>> much traffic for the network as well as for some clients. There is
>>>> simply not enough time for a double backup.
>>>
>>> Did you actually try that? Except for the first backup, backups are
>>> usually rather quick and do not cause much traffic.
>
> You might have to look at remote storage replication if you really have
> too much data to back up. Keep in mind that restoring from a borg backup
> also needs some time, as borg does not utilize all server cores, so
> restoring a couple of TB will take a while.
> > Felix It is a bit too much right now, but later with two new systems it wont ;) Oliver -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From oh at dom.de Mon May 10 11:20:54 2021 From: oh at dom.de (Oliver Hoffmann) Date: Mon, 10 May 2021 17:20:54 +0200 Subject: [Borgbackup] Disaster Recovery or backup of backup In-Reply-To: References: <40324cf2-93a9-df66-62db-a5d6f2a3f89c@dom.de> Message-ID: <9617e686-0c70-d4f0-cc4a-223ed9354b0c@dom.de> Hi Marian, I considered that. Well, theoretically. I'll just try out different things once the HW arrived and let you know how well it works. Thanks all! > Hello Oliver, > > have you considered a two-tier approach, using Borg to collect backups > from your clients and then protecting those backups using > filesystem-level snapshots (both btrfs and ZFS can send/receive > snapshots from other hosts). That would allow you to work around the > lack of replication facilities in Borg itself. > > Cheers, Marian > > Am Fr., 7. Mai 2021 um 16:18?Uhr schrieb Oliver Hoffmann >: > > Hi all, > > > I use borg for quite a while now and it works just fine. > > Now that I'm going to set up two Backup servers with approximately 20 TB > of backup data and roughly 40 clients each I wonder how to prevent data > loss. In other words how do I prepare for a disaster? Meaning system and > data gone. > > Simply rsyncing all folders to a nas or something and setting up a new > borg server won't work and two independent backups as suggested means to > much traffic for the network as well as for some clients. There is > simply not enough time for a double backup. > > So, here comes my approach. I intend to use xen or better xcp-ng and on > top the actual borg server as a VM. Snapshots/exports will be made every > night and kept somewhere else. The data will be rsynced off the server > too. In case of a disaster I just need a similar or identical HW, set up > xen, import the saved VM and copy all the borg folders with the repos > back. That way I'd have data and corresponding borg server put together > again. Or I resurrect the server on some other xen host and access the > data via nfs which would be quicker in case of an urgent restore. > > Do I miss something or does that sound feasible? > > Thank for your thoughts, > > Oliver > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > > -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From david at digitaltransitions.ca Wed May 12 16:07:57 2021 From: david at digitaltransitions.ca (David Thompson) Date: Wed, 12 May 2021 16:07:57 -0400 Subject: [Borgbackup] Borgbackup in a jail on linux Message-ID: <74BBFCC8-DD03-4CD7-947C-7C376786CF25@digitaltransitions.ca> Hi all, I?ve been playing around with jails and I am hoping to build out a jail with borgbackup as a part of it. I?m doing this for a client in order to separate their individual backups and only allow a small subset of commands on their backup server. I cannot figure out how to actually put borg into a jail as I am unable to figure out all the dependencies that need to be moved. This is on a Debian 10 machine. 
I am trying to hunt down any how too's on how to do just this and move borgbackup into a jail if possible. Any help would be greatly appreciated! Thank you From lazyvirus at gmx.com Wed May 12 16:16:12 2021 From: lazyvirus at gmx.com (Bzzzz) Date: Wed, 12 May 2021 22:16:12 +0200 Subject: [Borgbackup] Borgbackup in a jail on linux In-Reply-To: <74BBFCC8-DD03-4CD7-947C-7C376786CF25@digitaltransitions.ca> References: <74BBFCC8-DD03-4CD7-947C-7C376786CF25@digitaltransitions.ca> Message-ID: <20210512221612.0c47e198@msi.defcon1.lan> On Wed, 12 May 2021 16:07:57 -0400 David Thompson via Borgbackup wrote: > I?ve been playing around with jails and I am hoping to build out a > jail with borgbackup as a part of it. I?m doing this for a client in > order to separate their individual backups and only allow a small > subset of commands on their backup server. I cannot figure out how to > actually put borg into a jail as I am unable to figure out all the > dependencies that need to be moved. > > This is on a Debian 10 machine. > > I am trying to hunt down any how too's on how to do just this and move > borgbackup into a jail if possible. Any help would be greatly > appreciated! apt install apt-rdepends And how do you intend to go out of the jail to get the _FS_ files to be backup? Jean-Yves From tw at waldmann-edv.de Thu May 13 03:23:01 2021 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 13 May 2021 09:23:01 +0200 Subject: [Borgbackup] borgbackup release 1.2.0b3 Message-ID: <17b0c480-2c65-0c93-3d4a-88a514ecfc22@waldmann-edv.de> Released borgbackup 1.2.0b3 with some fixes and new features: Please help testing: https://github.com/borgbackup/borg/releases/tag/1.2.0b3 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From david at digitaltransitions.ca Thu May 13 08:13:49 2021 From: david at digitaltransitions.ca (David Thompson) Date: Thu, 13 May 2021 08:13:49 -0400 Subject: [Borgbackup] Borgbackup in a jail on linux In-Reply-To: <20210512221612.0c47e198@msi.defcon1.lan> References: <74BBFCC8-DD03-4CD7-947C-7C376786CF25@digitaltransitions.ca> <20210512221612.0c47e198@msi.defcon1.lan> Message-ID: <090A445A-C5DD-4D3F-8EAA-13F5CFD55E76@digitaltransitions.ca> Hey thanks for the reply and sorry. I didn?t know about the apt-rdepends so thats great and I?ll check that out. Thanks you. > And how do you intend to go out of the jail to get the _FS_ files to be backup? ^ Sorry, I?m not sure I understand you mean here by this statement. > On May 12, 2021, at 4:16 PM, Bzzzz wrote: > > On Wed, 12 May 2021 16:07:57 -0400 > David Thompson via Borgbackup wrote: > >> I?ve been playing around with jails and I am hoping to build out a >> jail with borgbackup as a part of it. I?m doing this for a client in >> order to separate their individual backups and only allow a small >> subset of commands on their backup server. I cannot figure out how to >> actually put borg into a jail as I am unable to figure out all the >> dependencies that need to be moved. >> >> This is on a Debian 10 machine. >> >> I am trying to hunt down any how too's on how to do just this and move >> borgbackup into a jail if possible. Any help would be greatly >> appreciated! > > apt install apt-rdepends > > And how do you intend to go out of the jail to get the _FS_ files to be > backup? 
> > Jean-Yves > From jolson at kth.se Thu May 13 17:11:33 2021 From: jolson at kth.se (Jonas Olson) Date: Thu, 13 May 2021 23:11:33 +0200 Subject: [Borgbackup] With the suggested server setup, how does a user list all repositories? Message-ID: <7a386989-60f9-90a3-9909-cd1f52bacd3a@kth.se> With the suggested setup for a Borg server [0], a user can create multiple repositories. Once created, it seems the repositories are invisible to the user, who can use them only by remembering their names. How can a user see what repositories exist? (Also, how can a repository be deleted?) Regards Jonas Olson [0] From tw at waldmann-edv.de Sat May 15 16:33:11 2021 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 15 May 2021 22:33:11 +0200 Subject: [Borgbackup] With the suggested server setup, how does a user list all repositories? In-Reply-To: <7a386989-60f9-90a3-9909-cd1f52bacd3a@kth.se> References: <7a386989-60f9-90a3-9909-cd1f52bacd3a@kth.se> Message-ID: <2cae356b-2919-9041-fa71-4424c3981f63@waldmann-edv.de> > With the suggested setup for a Borg server [0], a user can create > multiple repositories. Once created, it seems the repositories are > invisible to the user, who can use them only by remembering their names. > How can a user see what repositories exist? (Also, how can a repository > be deleted?) Guess the answer is that borg does not really care for managing stuff above the repository directory level. If you have shell access, you can of course just look around manually. You can delete a repo using "borg delete REPO" with REPO being the repo path or URL. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From jolson at kth.se Wed May 19 11:40:05 2021 From: jolson at kth.se (Jonas Olson) Date: Wed, 19 May 2021 17:40:05 +0200 Subject: [Borgbackup] Rationale for keeping a copy of the repo key Message-ID: <81faab86-853a-08ae-7ff6-51157905c9e2@kth.se> It is recommended [0] that one stores a copy of the encrypted repo key in a safe place. Naively, this would seem unnecessary, as it is stored in the repository and no copy is normally needed. Is the reasoning that, in case of filesystem corruption, you still have some chance of saving parts of the backup as long as the key is safe? The documentation doesn't spell it out, as far as I have been able to see, and I'd like to make sure I understand it correctly. Regards, Jonas Olson [0] "Make a backup copy of the key file (keyfile mode) or repo config file (repokey mode) and keep it at a safe place, so you still have the key in case it gets corrupted or lost. Also keep the passphrase at a safe place. The backup that is encrypted with that key won?t help you with that, of course." From public at enkore.de Wed May 19 12:23:24 2021 From: public at enkore.de (d0) Date: Wed, 19 May 2021 18:23:24 +0200 Subject: [Borgbackup] Rationale for keeping a copy of the repo key In-Reply-To: <81faab86-853a-08ae-7ff6-51157905c9e2@kth.se> References: <81faab86-853a-08ae-7ff6-51157905c9e2@kth.se> Message-ID: Encryption keys have a tremendous amount of error leverage (similar to filesystem metadata, but harder to fix when corrupted): corrupting a very small amount of data makes a very large amount of data unusable. So it makes sense to have a backup here. Cheers, Marian Am Mi., 19. Mai 2021 um 17:48 Uhr schrieb Jonas Olson : > It is recommended [0] that one stores a copy of the encrypted repo key > in a safe place. Naively, this would seem unnecessary, as it is stored > in the repository and no copy is normally needed. 
Is the reasoning that,
> in case of filesystem corruption, you still have some chance of saving
> parts of the backup as long as the key is safe? The documentation
> doesn't spell it out, as far as I have been able to see, and I'd like
> to make sure I understand it correctly.
>
> Regards,
> Jonas Olson
>
> [0] "Make a backup copy of the key file (keyfile mode) or repo config
> file (repokey mode) and keep it at a safe place, so you still have the
> key in case it gets corrupted or lost. Also keep the passphrase at a
> safe place. The backup that is encrypted with that key won't help you
> with that, of course."
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup

From cherio at gmail.com Sun Jun 27 23:05:21 2021
From: cherio at gmail.com (Cherio)
Date: Sun, 27 Jun 2021 23:05:21 -0400
Subject: [Borgbackup] Remote backups via ssh - OpenSSL version mismatch
Message-ID:

At some point (I can't pinpoint exactly when) borg stopped working with
remote targets via ssh. The request doesn't even seem to leave the
machine that starts the backup; borg halts with the message "Remote:
OpenSSL version mismatch. Built against 1010106f, you have 101000cf".
Nothing appears to reach the remote backup machine at all.

I run borg 1.1.15 on Ubuntu 20.04. I tried the standalone binaries, both
from borg-linux64.tgz and the single binary borg-linux64, with the same
result. I also tried downloading 1.1.16, and it had exactly the same
issue.

Below are some version tests. I can't figure out where OpenSSL version
101000cf comes from.

> openssl version
OpenSSL 1.1.1f  31 Mar 2020

> ssh -V
OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f  31 Mar 2020

> python
Python 3.8.5 (default, May 27 2021, 13:30:53) [GCC 9.3.0] on linux
>>> import ssl
>>> ssl.OPENSSL_VERSION
'OpenSSL 1.1.1f 31 Mar 2020'

I am hoping someone here may have an immediate light bulb and point me
in the right direction.
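(For anyone hitting the same error: "OpenSSL version mismatch. Built
against X, you have Y" is printed by OpenSSH itself when the libcrypto
it loads at runtime differs from the one it was built against, and
borg's "Remote:" prefix just relays stderr from the ssh subprocess. One
way to narrow it down - assuming a wrapper script or a stale library
path is involved, which is only a guess - could be:

  # which ssh does borg actually invoke? (defaults to "ssh", see $BORG_RSH)
  echo "$BORG_RSH"; command -v ssh
  # which libssl/libcrypto does that binary load at runtime?
  ldd "$(command -v ssh)" | grep -i -E 'ssl|crypto'
  echo "$LD_LIBRARY_PATH"
  # bypass any wrapper by forcing a specific client; repo URL is a placeholder
  BORG_RSH=/usr/bin/ssh borg list ssh://user@backuphost/./repo

)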