[Borgbackup] Disaster Recovery or backup of backup
Oliver Hoffmann
oh at dom.de
Mon May 10 07:39:23 EDT 2021
>> Now that I'm going to set up two Backup servers with approximately 20 TB
>> of backup data and roughly 40 clients each I wonder how to prevent data
>> loss.
>
> Are they at same place or different places? Mutual backup?
Yes, same place. Two servers give more flexibility and a kind of
redundancy as well. I could move around VMs or data for example.
>
>> Simply rsyncing all folders to a nas or something and setting up a new
>> borg server won't work
>
> Why?
As I understood it, that way you can't access your backups anymore, because
the repos are encrypted by the server that made the backups.
>
>> and two independent backups as suggested means too
>> much traffic for the network as well as for some clients. There is
>> simply not enough time for a double backup.
>
> Did you actually try that? Except the first backup, backups are usually
> rather quick and not causing much traffic.
>
Yes, I know what's possible in my setup. Borg is quick, yes, but it's
just too many clients and too much data.
> Do you use 1 repo per client?
>
Yes, each client has its own.
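For context, the one-repo-per-client layout can be sketched like this (a dry-run sketch that only prints the commands; the repo root and client names are hypothetical, and in practice each client would run its own init over ssh):

```shell
#!/bin/sh
# Dry-run sketch: print the per-client "borg init" commands instead of
# executing them, so the layout is visible without borg installed.
BASE=/srv/borg                    # hypothetical repo root on the server
for client in web1 db1 mail1; do  # hypothetical client names
    # one encrypted repo per client; the key/passphrase stays with the client
    echo borg init --encryption=repokey "$BASE/$client"
done
```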
>> So, here comes my approach. I intend to use xen or better xcp-ng and on
>> top the actual borg server as a VM. Snapshots/exports will be made every
>> night and kept somewhere else. The data will be rsynced off the server
>> too. In case of a disaster I just need similar or identical HW, set up
>> xen, import the saved VM and copy all the borg folders with the repos
>> back. That way I'd have data and corresponding borg server put together
>> again.
>
> More complex, but you know better the effort you need to do to get it
> going again.
>
Getting everything back to normal would mean replacing the HW, then
putting xcp-ng on it, importing the VM (the actual borg server), a few
tweaks, and it's done.
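Sketched as commands, that recovery path might look like this (hedged: the VM label, file names, and paths are assumptions; `xe` is the xcp-ng CLI, and the script only echoes what would be run):

```shell
#!/bin/sh
# Dry-run sketch of the nightly export and the disaster-recovery import.
VM=borg-server                    # hypothetical VM name label
XVA=/mnt/offsite/borg-server.xva  # hypothetical off-server export target

# nightly, on the xcp-ng host: export the borg-server VM
echo xe vm-export vm="$VM" filename="$XVA"

# after a disaster, on replacement HW with xcp-ng freshly installed:
echo xe vm-import filename="$XVA"
# then copy the rsynced repo folders back into place
echo rsync -aH --numeric-ids /mnt/offsite/repos/ /srv/borg/
```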
> One can also recover from a full file backup of a simple linux system
> (no hypervisor, no VMs) using a usb live system, if you are used to it.
>
Possible, but USB sticks tend to fail.
>> Or I resurrect the server on some other xen host and access the
>> data via nfs which would be quicker in case of an urgent restore.
>
> That would mean the data goes over the network twice. Faster to get it
> up again, but if a big restore using that is faster in the end has to be
> seen. For normal production, I'd avoid having the borg repo on NFS, but
> rather have it on local storage.
The thing here is that the xen host itself is the NFS server, so the
data is effectively local. Having a separate NFS server somewhere else
wouldn't be a good idea, I agree.
>
> Besides the above, you could also prevent some issues:
>
> Use zfs (mirror, raid-z2) so a single disk failure or bad sector does
> not cause a need for a recovery from backup. Also ECC memory, so RAM
> issues do not corrupt your backups.
>
I intend to put everything on proper servers, so ECC, a decent RAID
controller, SAS HDs, etc. are already there. In the end it'll be RAID 6 +
hot spare.
Nothing against zfs, but you need to use something all the people
involved can agree on, and in my case that is xen and "classic" RAID.
> Of course that is not a replacement for a 2nd backup, but it reduces
> how often you'll actually have to use it.
>
>
> As the FAQ explains also, be careful with error propagation and crypto
> when using rsync to make repo copies.
>
>
OK, I'll look into that.
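A careful repo copy along the lines the FAQ suggests could be sketched as follows (paths are hypothetical; `borg with-lock` holds the repository lock so no backup modifies the repo mid-copy, and `borg check` verifies the copy; the script only prints the commands):

```shell
#!/bin/sh
# Dry-run sketch: copy a repo while holding its lock, then verify the copy.
REPO=/srv/borg/web1      # hypothetical source repo
COPY=/mnt/nas/borg/web1  # hypothetical destination on the NAS

# rsync runs under the repo lock, so the copy is internally consistent
echo borg with-lock "$REPO" rsync -aH --delete "$REPO/" "$COPY/"
# verify the low-level repository structure of the copy
echo borg check --repository-only "$COPY"
```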
Thank you for your thoughts!
Cheers,
Oliver