From dave at gasaway.org Thu Jul 12 11:57:18 2018 From: dave at gasaway.org (David Gasaway) Date: Thu, 12 Jul 2018 08:57:18 -0700 Subject: [Borgbackup] Recovery from missing segments In-Reply-To: References: <4251872c-73d3-895e-c02d-61289d4d8179@waldmann-edv.de> Message-ID: On Wed, Jun 27, 2018 at 8:38 AM, David Gasaway wrote: > Yes, but I'm trying to find a check mode that will repair this > condition without reading the segment files, as that would take a very > long time. Not that anyone cares, but I went ahead with a 'borg check --repair'. It took over 3 days, thankfully with no network issues in that time. The most stressful part was I had no good way to gauge how far it progressed. The command gave no output for 3 days, possibly because it didn't find any issues until the end. Perhaps something could be added to write output every so many GiB or chunks. > Hypothetical mode would simply > check the filesystem for the expected segment files, and if not found, > replace the associated chunks in the index with zero chunks so the > chunks/segments get recreated at the next backup (assuming source data > is still present). On second thought, I may also need it to do something like check the expected length of segment files in case a new chunk was added to an existing segment, but the segment was "rolled back". The more I think about this though, what I'd probably want is a way to invalidate any chunk created after a certain date/time. Not sure if this is possible. Still, I've taken steps to avoid this in the future. Most critically, I upgraded s3ql which added new features to avoid this kind of loss. -- -:-:- David K. Gasaway -:-:- Email: dave at gasaway.org From tve at voneicken.com Mon Jul 16 22:06:39 2018 From: tve at voneicken.com (Thorsten von Eicken) Date: Tue, 17 Jul 2018 02:06:39 +0000 Subject: [Borgbackup] what are the implications of this repair? 
Message-ID: <01000164a5fd51fa-5bbbaffa-87fa-4fab-bcf7-8dc2ddae8bc5-000000@email.amazonses.com> I've had filesystem corruption on a backup disk and I've run borg check --repair with the result that I got some messages like the following one: big/MusicFLAC/_Thorsten/lesson2/02 Unknown - Track 02.mp3: New missing file chunk detected (Byte 0-1269141). Replacing with all-zero chunk. I'm now wondering what this means for the next backup I'm making. I still have that file at the source, so when I do a borg create will it back up the file fresh or will it think it already has it and dedup it including the zero block? Thanks! Thorsten -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Tue Jul 17 17:09:20 2018 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 17 Jul 2018 23:09:20 +0200 Subject: [Borgbackup] Recovery from missing segments In-Reply-To: References: <4251872c-73d3-895e-c02d-61289d4d8179@waldmann-edv.de> Message-ID: <5833d951-9ebf-34f5-96eb-03bc5c8fce76@waldmann-edv.de> > Not that anyone cares, but I went ahead with a 'borg check --repair'. > It took over 3 days, thankfully with no network issues in that time. > The most stressful part was I had no good way to gauge how far it > progressed. Did you use --progress? Your borg version (on client, on server)? >> Hypothetical mode would simply >> check the filesystem for the expected segment files, and if not found, >> replace the associated chunks in the index with zero chunks so the >> chunks/segments get recreated at the next backup (assuming source data >> is still present). > > On second thought, I may also need it to do something like check the > expected length of segment files in case a new chunk was added to an > existing segment, but the segment was "rolled back". The more I think > about this though, what I'd probably want is a way to invalidate any > chunk created after a certain date/time. Not sure if this is > possible. 
If borg check --repair determines that a chunk is not there any more, it puts a same-length replacement chunk there, but it still remembers the healthy chunkid (content hash). See my other post from today about how it can later use that for healing. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Tue Jul 17 17:03:06 2018 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 17 Jul 2018 23:03:06 +0200 Subject: [Borgbackup] what are the implications of this repair? In-Reply-To: <01000164a5fd51fa-5bbbaffa-87fa-4fab-bcf7-8dc2ddae8bc5-000000@email.amazonses.com> References: <01000164a5fd51fa-5bbbaffa-87fa-4fab-bcf7-8dc2ddae8bc5-000000@email.amazonses.com> Message-ID: On 17.07.2018 04:06, Thorsten von Eicken wrote: > I've had filesystem corruption on a backup disk and I've run borg check > --repair with the result that I got some messages like the following one: > > big/MusicFLAC/_Thorsten/lesson2/02 Unknown - Track 02.mp3: New missing > file chunk detected (Byte 0-1269141). Replacing with all-zero chunk. > > I'm now wondering what this means for the next backup I'm making. I > still have that file at the source, so when I do a borg create will it > back up the file fresh or will it think it already has it and dedup it > including the zero block? If the missing chunk is still in the source data and you run a backup (and thus recreate the missing chunk), the new archive will be healthy. To also get the old archives (that are not healthy and have the all-zero replacement chunk) into a healthy state, you need to run borg check --repair again after that backup. 
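The replace-and-remember behavior described above can be sketched like this (a minimal toy model with hypothetical data structures; borg's real chunk index and item format differ):

```python
# Toy sketch of "replace missing chunk, remember healthy id, heal later".
# All names here are hypothetical, not borg's actual internals.

ZERO_ID = "zero"

repo = {"abc123": b"original chunk data"}   # chunk id -> chunk data
archive_item = {"chunks": ["abc123"]}       # a file's chunk references

def check_repair(repo, item):
    """Replace references to missing chunks with a zero chunk,
    but keep the healthy chunk id for later healing."""
    fixed = []
    for cid in item["chunks"]:
        if cid in repo:
            fixed.append(cid)
        else:
            fixed.append((ZERO_ID, cid))    # placeholder + remembered id
    item["chunks"] = fixed

def heal(repo, item):
    """If a remembered chunk id is back in the repo, restore the reference."""
    item["chunks"] = [
        ref[1] if isinstance(ref, tuple) and ref[1] in repo else ref
        for ref in item["chunks"]
    ]

del repo["abc123"]                       # filesystem corruption loses the chunk
check_repair(repo, archive_item)         # reference now points at a zero chunk
repo["abc123"] = b"original chunk data"  # next backup recreates the chunk
heal(repo, archive_item)                 # second check --repair heals the archive
print(archive_item["chunks"])            # ['abc123']
```

The key point is that the placeholder never forgets which content hash it stands for, so a later backup of unchanged source data is enough to make old archives healthy again.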
-- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From dave at gasaway.org Tue Jul 17 17:53:15 2018 From: dave at gasaway.org (David Gasaway) Date: Tue, 17 Jul 2018 14:53:15 -0700 Subject: [Borgbackup] Recovery from missing segments In-Reply-To: <5833d951-9ebf-34f5-96eb-03bc5c8fce76@waldmann-edv.de> References: <4251872c-73d3-895e-c02d-61289d4d8179@waldmann-edv.de> <5833d951-9ebf-34f5-96eb-03bc5c8fce76@waldmann-edv.de> Message-ID: On Tue, Jul 17, 2018 at 2:09 PM, Thomas Waldmann wrote: > Did you use --progress? Your borg version (on client, on server)? Sorry! I missed that one. >>> Hypothetical mode would simply >>> check the filesystem for the expected segment files, and if not found, >>> replace the associated chunks in the index with zero chunks so the >>> chunks/segments get recreated at the next backup (assuming source data >>> is still present). >> >> On second thought, I may also need it to do something like check the >> expected length of segment files in case a new chunk was added to an >> existing segment, but the segment was "rolled back". The more I think >> about this though, what I'd probably want is a way to invalidate any >> chunk created after a certain date/time. Not sure if this is >> possible. > > If borg check --repair determines that a chunk is not there any more, it > puts a same-length replacement chunk there, but it still remembers the > healthy chunkid (content hash). See my other post from today about how > it can later use that for healing. Yes, but it only does that with a repository check that reads the content of all the segment files. I'm still hypothesizing out loud a new mode that would not. Thanks. -- -:-:- David K. Gasaway -:-:- Email: dave at gasaway.org From tve at voneicken.com Tue Jul 17 18:18:05 2018 From: tve at voneicken.com (Thorsten von Eicken) Date: Tue, 17 Jul 2018 22:18:05 +0000 Subject: [Borgbackup] what are the implications of this repair? 
In-Reply-To: References: <01000164a5fd51fa-5bbbaffa-87fa-4fab-bcf7-8dc2ddae8bc5-000000@email.amazonses.com> Message-ID: <01000164aa526af0-cda9e9c5-005d-49aa-ba27-1227a2c154de-000000@email.amazonses.com> On 07/17/2018 02:03 PM, Thomas Waldmann wrote: > On 17.07.2018 04:06, Thorsten von Eicken wrote: >> I've had filesystem corruption on a backup disk and I've run borg check >> --repair with the result that I got some messages like the following one: >> >> big/MusicFLAC/_Thorsten/lesson2/02 Unknown - Track 02.mp3: New missing >> file chunk detected (Byte 0-1269141). Replacing with all-zero chunk. >> >> I'm now wondering what this means for the next backup I'm making. I >> still have that file at the source, so when I do a borg create will it >> back up the file fresh or will it think it already has it and dedup it >> including the zero block? > If the missing chunk is still in the source data and you run a backup > (and thus recreate the missing chunk), the new archive will be healthy. > > To also get the old archives (that are not healthy and have the all-zero > replacement chunk) into a healthy state, you need to run borg check > --repair again after that backup. Thanks for the info, that's pretty cool, nicely designed! In another repo I wasn't as lucky, the index files got corrupted, and instead of ~20 archives only one remained. The messages I got from check --repair are like: Analyzing archive photos-2018-05-31T03:57-07:00 (7/17) Archive metadata block is missing! After the check-repair this is one of the repos that is no longer listed by "borg list". Do I gather correctly that there is no way to reconstruct those indexes? And at what point does all the data that is no longer referenced get garbage collected, I presume that happens the next time I run borg prune? Thanks!!! Thorsten -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tw at waldmann-edv.de Wed Jul 18 05:49:45 2018 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 18 Jul 2018 11:49:45 +0200 Subject: [Borgbackup] what are the implications of this repair? In-Reply-To: <01000164aa526af0-cda9e9c5-005d-49aa-ba27-1227a2c154de-000000@email.amazonses.com> References: <01000164a5fd51fa-5bbbaffa-87fa-4fab-bcf7-8dc2ddae8bc5-000000@email.amazonses.com> <01000164aa526af0-cda9e9c5-005d-49aa-ba27-1227a2c154de-000000@email.amazonses.com> Message-ID: > In another repo I wasn't as lucky, the index files got corrupted, and > instead of ~20 archives only one remained. The messages I got from check > --repair are like: > > Analyzing archive photos-2018-05-31T03:57-07:00 (7/17) > Archive metadata block is missing! That likely means you missed or corrupted files in repo/data/... Did you check the hardware (see our docs about IntegrityErrors). > After the check-repair this is one of the repos that is no longer listed > by "borg list". Do I gather correctly that there is no way to > reconstruct those indexes? It's archive metadata (list of all blocks that store information about backed up filenames, file metadata, lists of file content block references), not just an index/cache. > And at what point does all the data that is > no longer referenced get garbage collected, I presume that happens the > next time I run borg prune? No, iirc it should be in check --repair already. prune is just a time-based delete. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From lists at marvingaube.de Wed Jul 18 08:28:02 2018 From: lists at marvingaube.de (Marvin Gaube) Date: Wed, 18 Jul 2018 14:28:02 +0200 Subject: [Borgbackup] Problems with big files Message-ID: <892961eb-28ad-8497-6a04-6ea1be29ac53@marvingaube.de> Hello, i started using borgbackup to do backups from my homeserver to a backup server elsewhere (Standard SSH). 
Currently I'm stuck in the first run with a very strange behavior: I have a big file, around 100 GB. Unfortunately, the connection is interrupted twice a day. Between the interruptions, borg usually gets around 20-40 GB transferred. This has been running for two weeks now, and I never get further than this big file - it's not touched at all. Theoretically, it would have taken 2-3 days to transfer that file. My idea is that, for reasons I don't know, borg restarts transferring this file completely instead of using the chunks already transferred. But, as far as I understood the documentation, it should have reused those chunks. Is there any idea how I could solve or at least further debug the problem? borg is on version 1.0.9 from the Debian repo on both machines. Command is:
REPOSITORY="ssh://user at host/backupdir"
borg create -v --stats --progress --checkpoint-interval 300 \
    $REPOSITORY::'{now:%Y-%m-%d_%H:%M}' \
    /path/to/huge/directory
I set checkpoint-interval to 300 hoping it would solve the problem, but it didn't. Thanks! Marvin Gaube -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From public at enkore.de Wed Jul 18 08:36:59 2018 From: public at enkore.de (Marian Beermann) Date: Wed, 18 Jul 2018 14:36:59 +0200 Subject: [Borgbackup] Problems with big files In-Reply-To: <892961eb-28ad-8497-6a04-6ea1be29ac53@marvingaube.de> References: <892961eb-28ad-8497-6a04-6ea1be29ac53@marvingaube.de> Message-ID: On 18.07.2018 14:28, Marvin Gaube wrote: > Hello, > i started using borgbackup to do backups from my homeserver to a backup > server elsewhere (Standard SSH). Currently I'm stuck in the first run > with a very strange behavior: > I have a big file, around 100 GB. Unfortunately, the connection is > interrupted twice a day. Between the interruptions, borg usually gets > around 20-40 GB transferred.
This runs for two weeks now, I'm never > getting further than this big file - and it's not touched at all. > Theoretically, it would have taken 2-3 days to transfer that file. > > My idea is that, for reasons I don't know, borg restarts transferring > this file completely instead of using the chunks already transferred. > But, as far as I understood the documentation, it should have reused > those chunks. > > Is there any idea how I could solve or at least further debug the problem? > > borg is on version 1.0.9 from the Debian repo on both machines. > > Command is: > REPOSITORY="ssh://user at host/backupdir" > borg create -v --stats --progress --checkpoint-interval 300 \ > $REPOSITORY::'{now:%Y-%m-%d_%H:%M}' \ > /path/to/huge/directory > > I set checkpoint-interval to 300 hoping it would solve the problem, but > it didn't. borg 1.0 does not do checkpoints in files. > Thanks! > Marvin Gaube > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From tw at waldmann-edv.de Wed Jul 18 09:41:27 2018 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Wed, 18 Jul 2018 15:41:27 +0200 Subject: [Borgbackup] Problems with big files In-Reply-To: References: <892961eb-28ad-8497-6a04-6ea1be29ac53@marvingaube.de> Message-ID: <68a73e22-0444-cb65-76fd-7742c52a692d@waldmann-edv.de> >> I set checkpoint-interval to 300 hoping it would solve the problem, but >> it didn't. > > borg 1.0 does not do checkpoints in files. ... and borg 1.1.x does (just in case that was unclear). Note: if you read docs online (on readthedocs), always check the version number (you can choose it via the version selector there to read the correct docs for your borg version). -- GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt.
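Why checkpoints matter for a huge file: a restarted backup only skips data that was actually committed to the repo. Here is a rough sketch of resume-by-deduplication, under the assumption that chunks from an interrupted run were committed (borg 1.1 does this via in-file checkpoints; borg 1.0 only commits at file boundaries, so an unfinished big file starts over). Toy chunker and repo, not borg's real code:

```python
# Toy model: a "backup" transfers chunks it does not have yet and may be
# interrupted. If earlier chunks were committed, a retry resumes cheaply.
import hashlib

CHUNK = 4  # toy chunk size in bytes

def chunk_ids(data):
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

big_file = b"ABCDEFGHIJKLMNOP"  # stands in for the 100 GB file (4 chunks)
repo = set()                    # ids of chunks committed on the server

def backup(data, repo, budget):
    """Transfer at most `budget` new chunks, then 'lose the connection'."""
    sent = 0
    for cid in chunk_ids(data):
        if cid in repo:
            continue            # dedup: already committed, nothing to send
        if sent == budget:
            return False        # connection dropped mid-file
        repo.add(cid)
        sent += 1
    return True

assert backup(big_file, repo, budget=2) is False  # interrupted after 2 chunks
assert backup(big_file, repo, budget=2) is True   # retry only sends the rest
```

Without in-file checkpoints, the chunks from the first attempt would be rolled back as uncommitted and every retry would start from zero - which matches the behavior reported in this thread.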
From lists at marvingaube.de Wed Jul 18 10:07:50 2018 From: lists at marvingaube.de (Marvin Gaube) Date: Wed, 18 Jul 2018 16:07:50 +0200 Subject: [Borgbackup] Problems with big files In-Reply-To: <68a73e22-0444-cb65-76fd-7742c52a692d@waldmann-edv.de> References: <892961eb-28ad-8497-6a04-6ea1be29ac53@marvingaube.de> <68a73e22-0444-cb65-76fd-7742c52a692d@waldmann-edv.de> Message-ID: Hello, thanks, got it working ;) Added a PR to include this in the documentation; I didn't find that fact before: https://github.com/borgbackup/borg/pull/3987 Thank you Marvin On 18.07.2018 15:41, Thomas Waldmann wrote: > >>> I set checkpoint-interval to 300 hoping it would solve the problem, but >>> it didn't. >> >> borg 1.0 does not do checkpoints in files. > > ... and borg 1.1.x does (just in case that was unclear). > > Note: if you read docs online (on readthedocs), always check the > version number (you can choose it via the version selector there to > read the correct docs for your borg version). > From simplerezo at gmail.com Thu Jul 26 14:39:24 2018 From: simplerezo at gmail.com (Support SimpleRezo) Date: Thu, 26 Jul 2018 20:39:24 +0200 Subject: [Borgbackup] Export without keyfile (feature request ;)) Message-ID: Hi list! I will try to explain what I want to achieve :) I'm backing up data from a server (S) with borg to a remote server (R), and since I don't want the data to be readable on the remote, I'm using "keyfile" encryption. But, because I want some kind of "double safety", I want to export an encrypted image to "cold storage", directly from the borg repository, because (S) doesn't have a very fast or stable connection.
So, it would be awesome if:
- directly on (R), I was able to "extract" an encrypted tar without needing the keyfile (I don't know if that's possible by design);
- or, probably simpler to implement, sending an extract-tar request from (S), but with the tar generated directly on (R) (with the data still encrypted), so the data doesn't have to transit through (S)...
I was thinking about doing it by scripting (from S through an ssh channel, with extract-tar piped into openssl), but this necessarily means that (R) will have the keyfile temporarily in memory (and requires (R) to "re-encrypt")... Regards -- Clement SimpleRezo From alchemek at gmail.com Fri Jul 27 21:26:51 2018 From: alchemek at gmail.com (Timothy Beryl Grahek) Date: Fri, 27 Jul 2018 18:26:51 -0700 Subject: [Borgbackup] Borg 1.1.6: The consequences of interrupting a 'borg prune' command Message-ID: Hi, What would happen if I did a 'Ctrl - C' command on a 'borg prune'? Can I resume the prune action later? Am I prevented from backing up my system until I resume it? What are the consequences in general? Thank you, Timothy Beryl Grahek From tw at waldmann-edv.de Tue Jul 31 13:11:38 2018 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 31 Jul 2018 18:11:38 +0100 Subject: [Borgbackup] Export without keyfile (feature request ;)) In-Reply-To: References: Message-ID: <82945b03-6d1b-86c9-b8f1-e8e5c910e9ae@waldmann-edv.de> > I'm backing up data from a server (S) with borg to a remote server (R), and > since I don't want the data to be readable on the remote, I'm using > "keyfile" encryption. It cannot be read on the server when using "repokey" either - in both cases one also needs the passphrase that decrypts the encrypted key. > But, because I want some kind of "double safety", I want to export an > encrypted image to "cold storage", directly from the borg > repository, because (S) doesn't have a very fast or stable connection.
You can do 2 things rather easily:
- use some tool (rsync / rclone / ...) to copy and update the whole encrypted repo to another server. Be aware of error propagation (see the FAQ).
- just do 2 backups, from S to Ra and from S to Rb. The first backup might take a while, but if the daily changes on S are not that big, it might be doable even over a slow connection. Use excludes to get rid of big but unimportant stuff.
> So, it would be awesome if: > - directly on (R), I was able to "extract" an encrypted tar without needing the > keyfile (I don't know if that's possible by design); That's not possible due to everything being encrypted (including all metadata). > - or, probably simpler to implement, sending an extract-tar request from (S), > but with the tar generated directly on (R) (with the data still encrypted), > so the data doesn't have to transit through (S)... That's not possible either, as all the crypto is done client-side. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Tue Jul 31 13:28:01 2018 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 31 Jul 2018 18:28:01 +0100 Subject: [Borgbackup] Borg 1.1.6: The consequences of interrupting a 'borg prune' command In-Reply-To: References: Message-ID: <237b887d-d41e-7e96-4ee4-eabba7b5da34@waldmann-edv.de> > What would happen if I did a 'Ctrl - C' command on a 'borg prune'? It depends a bit on when you do that. For all currently released borg versions 1.0.x and 1.1.x, prune (and all other repo-writing commands) happens in 2 steps:
1. do things (e.g. delete archives according to the pruning policy), commit.
2. compact segments (so space is freed when objects from non-compact segment files are moved to new compact segment files), final commit.
In addition to the final commit, compact_segments will also do intermediate commits frequently. If you ctrl-c before the first commit, it is like you never ran the command: the next time you run borg, it will remove all uncommitted data.
If you ctrl-c before the second, final commit, prune will have already deleted the archives, but has not yet freed all the space. > Can I resume the prune action later? borg never really resumes, but as it avoids doing the same work twice (e.g. it does not store data blocks it already has), it often feels like resuming. So you'd just start prune again and it should work. > Am I prevented from backing up my > system until I resume it? What are the consequences in general? It should work no matter what you do. borg >= 1.2 will work a bit differently: compacting segments will never be done implicitly, only when you call the "borg compact" command. This can be done less frequently (as long as you have space), and it can also be invoked from the repo server (it does not need a key). -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tbutz at optitool.de Thu Aug 2 07:19:32 2018 From: tbutz at optitool.de (Thomas Butz) Date: Thu, 2 Aug 2018 13:19:32 +0200 (CEST) Subject: [Borgbackup] Not able to saturate ethernet In-Reply-To: <268872071.61.1533207817853.JavaMail.tbutz@BART> Message-ID: <1565465683.67.1533208767293.JavaMail.tbutz@BART> Currently I'm not able to saturate my 1 Gbps ethernet connection when running a backup job. The disk IO of the borg process on the client machine seems to cap at about 40 MB/s, while its CPU utilization is about 65%. I've already run a test with a 4 GB file generated from /dev/urandom. The transfer speed via rsync+ssh from client to server was about 95 MB/s, which is roughly the upper limit of the link. The payload of my backup job is rather big VM images (20-200 GB). According to someone on the IRC channel, the culprit could be hashing performance. Is there anything left I could try to improve the backup speed?
Server: borg 1.1.5 AMD FX(tm)-4170 with AES support borg init --encryption=repokey --append-only vmbackup Client: borg 1.1.6 Intel Xeon E5-2620 v4 @ 2.10GHz with AES+SHA support
borg create \
    --verbose \
    --filter AME \
    --list \
    --stats \
    --show-rc \
    --compression lz4 \
    --exclude-caches \
    \
    ::'{hostname}-{now:%Y-%m-%d_%H:%M}' \
    /vm_data
-- Best regards Thomas Butz From tw at waldmann-edv.de Sat Aug 11 15:53:07 2018 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sat, 11 Aug 2018 21:53:07 +0200 Subject: [Borgbackup] borgbackup 1.1.7 released! Message-ID: <5acc7725-f3d6-bfba-de63-87bc54d1cac8@waldmann-edv.de> Some bugfixes and support for Python 3.7. https://github.com/borgbackup/borg/releases/tag/1.1.7 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From dirk at deimeke.net Mon Aug 13 06:44:41 2018 From: dirk at deimeke.net (Dirk Deimeke) Date: Mon, 13 Aug 2018 12:44:41 +0200 Subject: [Borgbackup] Change repository from unencrypted to encrypted Message-ID: <6c52a41336d993a8dfd8aa9a81a27e7c@deimeke.net> Hi! I'd like to migrate my backups from a server where I have root access to a storage service. Would it be possible to migrate my repository to an encrypted one? If not, is it planned to offer that in the future? Cheers Dirk -- https://d5e.org/ From tw at waldmann-edv.de Mon Aug 13 07:48:52 2018 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 13 Aug 2018 13:48:52 +0200 Subject: [Borgbackup] Change repository from unencrypted to encrypted In-Reply-To: <6c52a41336d993a8dfd8aa9a81a27e7c@deimeke.net> References: <6c52a41336d993a8dfd8aa9a81a27e7c@deimeke.net> Message-ID: <2d39a9a1-a547-a17e-4a90-88c172e4ec52@waldmann-edv.de> > I'd like to migrate my backups from a server where I have root access to a > storage service. There is no way to transfer archives to another repo yet. You can just copy the repo as a whole (using e.g. rsync).
But if your repo is not encrypted, that's maybe not an option either for confidentiality reasons (if the target machine is not under your control). > Would it be possible to migrate my repository to an encrypted one? There is currently no way to switch to encrypted after borg init. So, the only current options for you are: - just start a new encrypted repo and use it for new backups - extract and re-backup your old backups (depending on how many archives / how much data you have, this might be time consuming) > If not, is it planned to offer that future-wise? Well, the need for such features isn't new, but they are a lot of work to implement. Also, they are only useful for one-time usage in specific scenarios, while other work being done is needed for the frequent/generic use cases. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From dirk at deimeke.net Mon Aug 13 07:58:03 2018 From: dirk at deimeke.net (Dirk Deimeke) Date: Mon, 13 Aug 2018 13:58:03 +0200 Subject: [Borgbackup] Change repository from unencrypted to encrypted In-Reply-To: <2d39a9a1-a547-a17e-4a90-88c172e4ec52@waldmann-edv.de> References: <6c52a41336d993a8dfd8aa9a81a27e7c@deimeke.net> <2d39a9a1-a547-a17e-4a90-88c172e4ec52@waldmann-edv.de> Message-ID: <6b11c73892f6c976d83c9e6eb400c8c6@deimeke.net> On 2018-08-13 13:48, Thomas Waldmann wrote: Hi Thomas, >> I like to migrate my backups from a server where I have root access to >> a >> storage service. > There is no way to transfer archives to another repo yet. this is what I thought. > You can just copy the repo as a whole (using e.g. rsync). > But if your repo is not encrypted, that's maybe not an option either > for > confidentiality reasons (if the target machine is not under your > control). Exactly. 
> So, the only current options for you are: > - just start a new encrypted repo and use it for new backups > - extract and re-backup your old backups (depending on how many > archives > / how much data you have, this might be time consuming) Option 1 ;-) + Migrate monthly and yearly backups. + Transfer the old repo to my local NAS and prune it day by day until all daily backups are cleaned. > Well, the need for such features isn't new, but they are a lot of work > to implement. > Also, they are only useful for one-time usage in specific scenarios, > while other work being done is needed for the frequent/generic use > cases. Absolutely agreed, thank you Thomas! Cheers Dirk -- https://d5e.org/ From w at swtk.info Tue Aug 28 10:13:06 2018 From: w at swtk.info (Wojtek Swiatek) Date: Tue, 28 Aug 2018 16:13:06 +0200 Subject: [Borgbackup] What do sizes mean when pruning? Message-ID: Hello everyone, I started to use borg as my backup platform some time ago, running a backup every 4 hours. Each backup name is timestamped with the time the backup started. I therefore ended up with ~60 backups and decided it is time to prune them (and then prune on a regular basis). I read the documentation about the command and went for a dry run first:
root at srv ~# borg prune -v --list --dry-run --keep-daily=7 --keep-weekly=4 --stats /services/backup/borg/
Keeping archive: srv-2018-08-28T12:00:01+02:00 Tue, 2018-08-28 12:00:02 [753f5c42bb554f1a3a7614a860f31d9b6e80ef3151635d6842dbd30eecdf58e0]
Would prune: srv-2018-08-28T08:00:01+02:00 Tue, 2018-08-28 08:00:02 [42399cbafe9ad4a8f752392bde648459529bc9811b91a97e4953525574e16ac7]
(...)
Would prune: srv-2018-08-27T00:00:01+02:00 Mon, 2018-08-27 00:00:03 [961e092dff8754806c7b9bbb7d5327c65ca25cd1d1a22f1b09bc78ea24c87310]
Keeping archive: srv-2018-08-26T20:00:01+02:00 Sun, 2018-08-26 20:00:02 [d96c3b2590afdd088c051a20a604a52355ecd4db28abad6badf13e43666fea87]
Would prune: srv-2018-08-26T16:00:01+02:00 Sun, 2018-08-26 16:00:03 [30920cb17a5109f395145f6b3555111bdd9734a9501ab40661dcf974e75fa4c8]
Would prune: srv-2018-08-26T12:00:00+02:00 Sun, 2018-08-26 12:00:01 [abf9a1781c01a6a35a214f68e67331a44379cea5d7e8f136e90ad8033b432f7a]
(...)
Would prune: srv-2018-08-17T20:22:26+02:00 Fri, 2018-08-17 20:22:27 [911446a4de9adff4b6b50a8340f6408e2ad2277d677f8f88f994b1b4b08ef8d9]
Would prune: srv Fri, 2018-08-17 16:00:04 [e59c0bb486ba06677e20775783e9efd266f99cb1cb29d820d5b95d7225468077]
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
Deleted data:                    0 B                  0 B                  0 B
All archives:               24.01 TB             21.32 TB            340.27 GB
                       Unique chunks         Total chunks
Chunk index:                  568754             73359328
------------------------------------------------------------------------------
The numbers above are weird as the whole backed up drive is 500 GB, but the docs mention something about sizes, so my understanding is that:
- there is 340 GB of files backed up
- but since there are several backups, this corresponds (backup after backup) to a total of "virtual" 21 TB of files
- and it does not matter because of deduplication.
In other words, a 1 GB file is backed up 10 times, so it looks like the backup takes 10 GB, but since there is deduplication, it is only 1 GB.
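This intuition can be sketched with made-up numbers (a toy model of the size accounting, not borg's actual code): "Original" counts raw bytes over every chunk reference, "Compressed" counts compressed bytes over every reference, and "Deduplicated" counts compressed bytes over unique chunks only.

```python
# Toy size accounting over deduplicated chunks.
from collections import Counter

chunks = {"a": (100, 40), "b": (100, 60)}   # chunk id -> (raw, compressed)

# ten archives all referencing the same two chunks,
# like repeated backups of unchanged data
archives = [["a", "b"] for _ in range(10)]

refs = Counter(cid for archive in archives for cid in archive)

original     = sum(chunks[c][0] * n for c, n in refs.items())
compressed   = sum(chunks[c][1] * n for c, n in refs.items())
deduplicated = sum(chunks[c][1] for c in refs)

print(original, compressed, deduplicated)   # 2000 1000 100
```

Read against the stats above: "All archives: 24.01 TB / 21.32 TB / 340.27 GB" would then be the tar-like total, the compressed total, and what is actually stored.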
I then ran the real pruning:
root at srv ~# borg prune -v --list --keep-daily=7 --keep-weekly=4 --stats /services/backup/borg/
Keeping archive: srv-2018-08-28T12:00:01+02:00 Tue, 2018-08-28 12:00:02 [753f5c42bb554f1a3a7614a860f31d9b6e80ef3151635d6842dbd30eecdf58e0]
Pruning archive: srv-2018-08-28T08:00:01+02:00 Tue, 2018-08-28 08:00:02 [42399cbafe9ad4a8f752392bde648459529bc9811b91a97e4953525574e16ac7] (1/58)
(...)
Pruning archive: srv Fri, 2018-08-17 16:00:04 [e59c0bb486ba06677e20775783e9efd266f99cb1cb29d820d5b95d7225468077] (58/58)
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
Deleted data:              -21.04 TB            -18.70 TB            -16.35 GB
All archives:                2.97 TB              2.62 TB            323.92 GB
                       Unique chunks         Total chunks
Chunk index:                  505291              9340546
------------------------------------------------------------------------------
I now see that the virtual 21 TB of data is gone, while the actual size is more or less the same as before. *My question is: what is the value of the "original size" information? It is for me an indication of how much space I would have needed if there were no deduplication, but besides that I do not really see what use I can make of it. Is there anything more behind it?* Cheers, Wojtek From tw at waldmann-edv.de Tue Aug 28 10:22:18 2018 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 28 Aug 2018 16:22:18 +0200 Subject: [Borgbackup] What do sizes mean when pruning? In-Reply-To: References: Message-ID: > *My question is: what is the value of the "original size" information? > It is for me an indication of how much space I would have needed if > there were no deduplication, but besides that I do not really see what use > I can make of it. Yes, that is it.
It's somehow the space used for:

tar (hypothetical)  ->  uncompressed / not deduped
tgz (hypothetical)  ->  compressed / not deduped
borg (real)         ->  compressed and deduped

> is there anything more behind it?* No. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From dave at gasaway.org Tue Aug 28 13:11:39 2018 From: dave at gasaway.org (David Gasaway) Date: Tue, 28 Aug 2018 10:11:39 -0700 Subject: [Borgbackup] What do sizes mean when pruning? In-Reply-To: References: Message-ID: On Tue, Aug 28, 2018 at 7:13 AM, Wojtek Swiatek wrote: > > > *My question is: what is the value of the "original size" information? It > is for me an indication of how much space I would have needed if there was > no deduplication but beside that I do not really see the usage I can make > of it. is there anything more behind it?* > In my experience, as someone who uses multiple roots and include/exclude patterns, "Original size" and "Compressed size" appear to be the complete size of the root folders, including files that are excluded by the patterns. In other words, even if I had deduplication and compression off, I would not need the space reported in "Original size". Not sure whether this applies to you. -- -:-:- David K. Gasaway -:-:- Email: dave at gasaway.org From adi5 at gmx.at Thu Aug 30 11:22:32 2018 From: adi5 at gmx.at (Adi Marvillo) Date: Thu, 30 Aug 2018 17:22:32 +0200 Subject: [Borgbackup] Prune and/or Compact finished with an error Message-ID: Hello everybody, I am using the script from https://borgbackup.readthedocs.io/en/latest/quickstart.html#automating-backups to automate my backups and always get this error message: Keeping archive: vps48XXXXXX-2018-08-30 16:37:51.687246 Thu, 2018-08-30 16:37:51 terminating with success status, rc 0 usage: borg [-h] [-V] ...
borg: error: argument : invalid choice: 'compact' (choose from 'serve', 'init', 'check', 'change-passphrase', 'key', 'migrate-to-repokey', 'create', 'extract', 'rename', 'delete', 'list', 'mount', 'umount', 'info', 'break-lock', 'prune', 'upgrade', 'help', 'debug', 'debug-info', 'debug-dump-archive-items', 'debug-dump-repo-objs', 'debug-get-obj', 'debug-put-obj', 'debug-delete-obj', 'debug-refcount-obj')
Thu Aug 30 17:09:55 CEST 2018 Backup, Prune and/or Compact finished with an error

What is the root cause of "Backup, Prune and/or Compact finished with an error"?

I was hoping to get more information by writing --debug into the script, but then no archive is defined ("borg create: error: argument ARCHIVE: "DEBUG": No archive specified") - how do I define the logging archive?

Thx adi

From tw at waldmann-edv.de Thu Aug 30 11:33:45 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 30 Aug 2018 17:33:45 +0200
Subject: [Borgbackup] Prune and/or Compact finished with an error
In-Reply-To: References: Message-ID: <309464f5-6cef-1ca5-f74f-b29473a6b90c@waldmann-edv.de>

> I am using the script from
> https://borgbackup.readthedocs.io/en/latest/quickstart.html#automating-backups

"latest" means the development version from master branch.

As you likely rather have some 1.1.x version, use that:

https://borgbackup.readthedocs.io/en/stable/

There is a version selector widget at the lower right.

From adi5 at gmx.at Fri Aug 31 04:49:25 2018
From: adi5 at gmx.at (Adi Marvillo)
Date: Fri, 31 Aug 2018 10:49:25 +0200
Subject: [Borgbackup] Prune and/or Compact finished with an error
In-Reply-To: <309464f5-6cef-1ca5-f74f-b29473a6b90c@waldmann-edv.de>
References: <309464f5-6cef-1ca5-f74f-b29473a6b90c@waldmann-edv.de>
Message-ID: <964f91ab-939d-ace0-f54e-9f5a0c48364b@gmx.at>

Hi Thomas, I have version 1.0.9-1 installed - out of the Debian repos.... Do you think the prune error comes from this particular version?
On 2018-08-30 at 17:33, Thomas Waldmann wrote:
>> I am using the script from
>> https://borgbackup.readthedocs.io/en/latest/quickstart.html#automating-backups
> "latest" means the development version from master branch.
>
> As you likely rather have some 1.1.x version, use that:
>
> https://borgbackup.readthedocs.io/en/stable/
>
> There is a version selector widget at the lower right.
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From adi5 at gmx.at Fri Aug 31 07:29:00 2018
From: adi5 at gmx.at (Adi Marvillo)
Date: Fri, 31 Aug 2018 13:29:00 +0200
Subject: [Borgbackup] Prune and/or Compact finished with an error
In-Reply-To: <964f91ab-939d-ace0-f54e-9f5a0c48364b@gmx.at>
References: <309464f5-6cef-1ca5-f74f-b29473a6b90c@waldmann-edv.de> <964f91ab-939d-ace0-f54e-9f5a0c48364b@gmx.at>
Message-ID:

On 2018-08-31 at 10:49, Adi Marvillo wrote:
> Hi Thomas, I have version 1.0.9-1 installed - out of the Debian
> repos.... Do you think the prune error comes from this particular version?
>
>
> On 2018-08-30 at 17:33, Thomas Waldmann wrote:
>>> I am using the script from
>>> https://borgbackup.readthedocs.io/en/latest/quickstart.html#automating-backups
>> "latest" means the development version from master branch.
>>
>> As you likely rather have some 1.1.x version, use that:
>>
>> https://borgbackup.readthedocs.io/en/stable/
>>
>> There is a version selector widget at the lower right.
>> _______________________________________________
>> Borgbackup mailing list
>> Borgbackup at python.org
>> https://mail.python.org/mailman/listinfo/borgbackup
>
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup

just wanted to let you know that the error message was a result of using a script written for a different version of borgbackup..... Apparently there were changes in the prune command...

Does anybody know how to enable logging, or how to write the logs of successful backups into a file?

Thx

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From tw at waldmann-edv.de Fri Aug 31 12:10:09 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 31 Aug 2018 18:10:09 +0200
Subject: [Borgbackup] Prune and/or Compact finished with an error
In-Reply-To: References: <309464f5-6cef-1ca5-f74f-b29473a6b90c@waldmann-edv.de> <964f91ab-939d-ace0-f54e-9f5a0c48364b@gmx.at>
Message-ID: <676748e6-906a-98bf-6e5d-e262b4f0ece1@waldmann-edv.de>

>> Hi Thomas, I have version 1.0.9-1 installed - out of the Debian
>> repos.... Do you think the prune error comes from this particular version?

Yes, because you are reading the docs of a future release, not of 1.0.9.

> Does anybody know how to enable logging, or how to write the
> logs of successful backups into a file?

Yes, just use standard I/O redirection:

borg ... >borg.log 2>&1

The docs also describe more advanced logging configurations, but they also take more effort to set up.

--
GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt.
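[Editor's note: the redirection one-liner above can be wrapped into a small helper when borg runs from a scheduler. This sketch is illustrative only - the function name and log format are made up, not part of borg:]

```python
import datetime
import subprocess

def run_logged(logpath, argv):
    """Run argv, appending stdout+stderr to logpath.

    Mirrors the shell idiom `cmd >>log 2>&1`, plus a timestamped
    header/footer so successful runs are visible in the file too.
    """
    with open(logpath, "a") as log:
        log.write(f"=== started {datetime.datetime.now().isoformat()} ===\n")
        log.flush()  # keep the header ahead of the child's output
        result = subprocess.run(argv, stdout=log, stderr=subprocess.STDOUT)
        log.write(f"=== finished rc={result.returncode} ===\n")
    return result.returncode

# Hypothetical usage (repo path and archive name are examples):
# run_logged("/var/log/borg.log",
#            ["borg", "create", "--stats", "/path/to/repo::daily", "/home"])
```

This keeps the plain-redirection behaviour Thomas describes while adding start/end markers, which also addresses the earlier question of how to log successful runs.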
From abelschreck3 at freenet.de Sat Sep 15 11:27:04 2018
From: abelschreck3 at freenet.de (Christian)
Date: Sat, 15 Sep 2018 17:27:04 +0200
Subject: [Borgbackup] question regarding restoration of latest backup
Message-ID: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de>

Hi altogether,

I've got a question regarding restoration of backups.

I looked at https://borgbackup.readthedocs.io/en/stable/quickstart.html for information. But somehow I couldn't find what I was looking for.

Out of sheer interest:

Suppose I created a backup with "borg create /path/to/repo::Monday ~/src ~/Documents".
And the next day: "borg create --stats /path/to/repo::Tuesday ~/src ~/Documents".

It says: "This backup will be a lot quicker and a lot smaller since only new never before seen data is stored."

O.K., I understand. But what if I wanted to restore the latest backup (the "Tuesday" one)? Since this one only holds the new data, I suppose I'd have to keep the "Monday" backup, too.

But what would be the correct command? Would it be "borg extract /path/to/repo::Tuesday"? After all, I want to restore the latest backup. Would "borg extract /path/to/repo::Tuesday" take care of the "Monday" backup automatically, since the bulk of the data is stored there?

Thanks a lot for your help.

Greetings
Rosika

From public at enkore.de Sat Sep 15 11:37:58 2018
From: public at enkore.de (Marian Beermann)
Date: Sat, 15 Sep 2018 17:37:58 +0200
Subject: [Borgbackup] question regarding restoration of latest backup
In-Reply-To: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de>
References: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de>
Message-ID: <2e8c97b2-bdc9-cdf6-c69e-c29add2976bd@enkore.de>

You don't have to think about any of that.

You can create an archive.
You can extract it.
Deleting other archives makes no difference.

-Marian

PS: That's the trade-off made by Borg and similar software. Requires more CPU and memory resources, but gets rid of inter-backup dependencies.
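[Editor's note: the archive independence described above comes from reference-counted, content-addressed chunk storage. The toy model below is illustrative only - it is not Borg's actual code or on-disk format - but it shows why deleting one archive never breaks another:]

```python
import hashlib

class ToyStore:
    """Toy content-addressed store: archives reference chunks by hash;
    a chunk is freed only when no archive references it anymore."""

    def __init__(self):
        self.chunks = {}    # hash -> data (each unique block stored once)
        self.refcount = {}  # hash -> number of references
        self.archives = {}  # archive name -> ordered list of chunk hashes

    def create(self, name, blocks):
        hashes = []
        for b in blocks:
            h = hashlib.sha256(b).hexdigest()
            if h not in self.chunks:          # duplicate data stored once
                self.chunks[h] = b
            self.refcount[h] = self.refcount.get(h, 0) + 1
            hashes.append(h)
        self.archives[name] = hashes

    def delete(self, name):
        for h in self.archives.pop(name):
            self.refcount[h] -= 1
            if self.refcount[h] == 0:         # only unreferenced chunks go away
                del self.chunks[h], self.refcount[h]

    def extract(self, name):
        return [self.chunks[h] for h in self.archives[name]]
```

Creating "Monday" with blocks A, B and "Tuesday" with A, B, C stores A and B only once; after deleting "Monday", "Tuesday" still extracts completely.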
On 9/15/18 5:27 PM, Christian wrote:
> Hi altogether,
>
> I've got a question regarding restoration of backups.
>
> I looked at https://borgbackup.readthedocs.io/en/stable/quickstart.html
> for information. But somehow I couldn't find what I was looking for.
>
> Out of sheer interest:
>
> Suppose I created a backup with "borg create /path/to/repo::Monday ~/src
> ~/Documents".
> And the next day: "borg create --stats /path/to/repo::Tuesday ~/src
> ~/Documents".
>
> It says: "This backup will be a lot quicker and a lot smaller since only
> new never before seen data is stored."
>
> O.K., I understand. But what if I wanted to restore the latest backup
> (the "Tuesday" one)?
> Since this one only holds the new data I suppose I'd have to keep the
> "Monday" backup, too.
>
> But what would be the correct command? Would it be "borg extract
> /path/to/repo::Tuesday"?
> After all I want to restore the latest backup.
> Would "borg extract /path/to/repo::Tuesday" take care of the "Monday"
> backup automatically since the bulk of the data is stored there?
>
> Thanks a lot for your help.
>
> Greetings
> Rosika
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

From abelschreck3 at freenet.de Sat Sep 15 11:48:52 2018
From: abelschreck3 at freenet.de (Christian)
Date: Sat, 15 Sep 2018 17:48:52 +0200
Subject: [Borgbackup] question regarding restoration of latest backup
In-Reply-To: <2e8c97b2-bdc9-cdf6-c69e-c29add2976bd@enkore.de>
References: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de> <2e8c97b2-bdc9-cdf6-c69e-c29add2976bd@enkore.de>
Message-ID: <6a439705-43b7-40f3-080e-0f938b5b0a9f@freenet.de>

Hi Marian,

thank you so much for your fast answer.

Yet it's not quite clear to me how it works.

Referring to my example: Do you say that I can delete the Monday-backup? That would imply that all the data of the Monday-backup is present in the Tuesday-backup as well.
I was of the opinion that the Tuesday-backup only holds the new data. Or did I get it wrong?

Greetings
Rosika

On 2018-09-15 at 17:37, Marian Beermann wrote:
> You don't have to think about any of that.
>
> You can create an archive.
> You can extract it.
> Deleting other archives makes no difference.
>
> -Marian
>
> PS: That's the trade-off made by Borg and similar software. Requires
> more CPU and memory resources, but gets rid of inter-backup dependencies.
>
> On 9/15/18 5:27 PM, Christian wrote:
>> Hi altogether,
>>
>> I've got a question regarding restoration of backups.
>>
>> I looked at https://borgbackup.readthedocs.io/en/stable/quickstart.html
>> for information. But somehow I couldn't find what I was looking for.
>>
>> Out of sheer interest:
>>
>> Suppose I created a backup with "borg create /path/to/repo::Monday ~/src
>> ~/Documents".
>> And the next day: "borg create --stats /path/to/repo::Tuesday ~/src
>> ~/Documents".
>>
>> It says: "This backup will be a lot quicker and a lot smaller since only
>> new never before seen data is stored."
>>
>> O.K., I understand. But what if I wanted to restore the latest backup
>> (the "Tuesday" one)?
>> Since this one only holds the new data I suppose I'd have to keep the
>> "Monday" backup, too.
>>
>> But what would be the correct command? Would it be "borg extract
>> /path/to/repo::Tuesday"?
>> After all I want to restore the latest backup.
>> Would "borg extract /path/to/repo::Tuesday" take care of the "Monday"
>> backup automatically since the bulk of the data is stored there?
>>
>> Thanks a lot for your help.
>>
>> Greetings
>> Rosika
>>
>> _______________________________________________
>> Borgbackup mailing list
>> Borgbackup at python.org
>> https://mail.python.org/mailman/listinfo/borgbackup
>>
>

From public at enkore.de Sat Sep 15 11:57:51 2018
From: public at enkore.de (Marian Beermann)
Date: Sat, 15 Sep 2018 17:57:51 +0200
Subject: [Borgbackup] question regarding restoration of latest backup
In-Reply-To: <6a439705-43b7-40f3-080e-0f938b5b0a9f@freenet.de>
References: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de> <2e8c97b2-bdc9-cdf6-c69e-c29add2976bd@enkore.de> <6a439705-43b7-40f3-080e-0f938b5b0a9f@freenet.de>
Message-ID:

Yes, that's indeed the case.

Each archive can be independently manipulated from every other archive. Borg recognizes duplicate data (e.g. from the Monday and Tuesday backups) and stores it only once; deleting Monday will not delete any data still used by Tuesday.

-Marian

On 9/15/18 5:48 PM, Christian wrote:
> Hi Marian,
>
>
> thank you so much for your fast answer.
>
> Yet it's not quite clear to me how it works.
>
> Referring to my example: Do you say that I can delete the Monday-backup?
> That would imply that all the data of the Monday-backup is present in
> the Tuesday-backup as well.
>
> I was of the opinion that the Tuesday-backup only holds the new data. Or
> did I get it wrong?
>
> Greetings
> Rosika
>
>
>
> On 2018-09-15 at 17:37, Marian Beermann wrote:
>> You don't have to think about any of that.
>>
>> You can create an archive.
>> You can extract it.
>> Deleting other archives makes no difference.
>>
>> -Marian
>>
>> PS: That's the trade-off made by Borg and similar software. Requires
>> more CPU and memory resources, but gets rid of inter-backup dependencies.
>>
>> On 9/15/18 5:27 PM, Christian wrote:
>>> Hi altogether,
>>>
>>> I've got a question regarding restoration of backups.
>>>
>>> I looked at https://borgbackup.readthedocs.io/en/stable/quickstart.html
>>> for information. But somehow I couldn't find what I was looking for.
>>>
>>> Out of sheer interest:
>>>
>>> Suppose I created a backup with "borg create /path/to/repo::Monday ~/src
>>> ~/Documents".
>>> And the next day: "borg create --stats /path/to/repo::Tuesday ~/src
>>> ~/Documents".
>>>
>>> It says: "This backup will be a lot quicker and a lot smaller since only
>>> new never before seen data is stored."
>>>
>>> O.K., I understand. But what if I wanted to restore the latest backup
>>> (the "Tuesday" one)?
>>> Since this one only holds the new data I suppose I'd have to keep the
>>> "Monday" backup, too.
>>>
>>> But what would be the correct command? Would it be "borg extract
>>> /path/to/repo::Tuesday"?
>>> After all I want to restore the latest backup.
>>> Would "borg extract /path/to/repo::Tuesday" take care of the "Monday"
>>> backup automatically since the bulk of the data is stored there?
>>>
>>> Thanks a lot for your help.
>>>
>>> Greetings
>>> Rosika
>>>
>>> _______________________________________________
>>> Borgbackup mailing list
>>> Borgbackup at python.org
>>> https://mail.python.org/mailman/listinfo/borgbackup
>>>
>

From abelschreck3 at freenet.de Sat Sep 15 12:31:53 2018
From: abelschreck3 at freenet.de (Christian)
Date: Sat, 15 Sep 2018 18:31:53 +0200
Subject: [Borgbackup] question regarding restoration of latest backup
In-Reply-To: References: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de> <2e8c97b2-bdc9-cdf6-c69e-c29add2976bd@enkore.de> <6a439705-43b7-40f3-080e-0f938b5b0a9f@freenet.de>
Message-ID: <7f72e50e-e551-e1ea-b96b-718fce448457@freenet.de>

Hi Marian,

thanks again for your answer.

Now I get it. That's really fantastic. I just tried it out by adding a new file to the original folder, then backing it up and deleting the first backup. And indeed the second backup holds all data, including the newly created file.

borgbackup is really a wonderful programme. Now that I know how it works I like it even better.

Thanks again for your fast help.

Have a nice weekend.

Greetings.
Rosika

On 2018-09-15 at 17:57, Marian Beermann wrote:
> Yes, that's indeed the case.
>
> Each archive can be independently manipulated from every other archive.
> Borg recognizes duplicate data (e.g. from the Monday and Tuesday backups)
> and stores it only once; deleting Monday will not delete any data still
> used by Tuesday.
>
> -Marian
>
> On 9/15/18 5:48 PM, Christian wrote:
>> Hi Marian,
>>
>>
>> thank you so much for your fast answer.
>>
>> Yet it's not quite clear to me how it works.
>>
>> Referring to my example: Do you say that I can delete the Monday-backup?
>> That would imply that all the data of the Monday-backup is present in
>> the Tuesday-backup as well.
>>
>> I was of the opinion that the Tuesday-backup only holds the new data. Or
>> did I get it wrong?
>>
>> Greetings
>> Rosika
>>
>>
>>
>> On 2018-09-15 at 17:37, Marian Beermann wrote:
>>> You don't have to think about any of that.
>>>
>>> You can create an archive.
>>> You can extract it.
>>> Deleting other archives makes no difference.
>>>
>>> -Marian
>>>
>>> PS: That's the trade-off made by Borg and similar software. Requires
>>> more CPU and memory resources, but gets rid of inter-backup dependencies.
>>>
>>> On 9/15/18 5:27 PM, Christian wrote:
>>>> Hi altogether,
>>>>
>>>> I've got a question regarding restoration of backups.
>>>>
>>>> I looked at https://borgbackup.readthedocs.io/en/stable/quickstart.html
>>>> for information. But somehow I couldn't find what I was looking for.
>>>>
>>>> Out of sheer interest:
>>>>
>>>> Suppose I created a backup with "borg create /path/to/repo::Monday ~/src
>>>> ~/Documents".
>>>> And the next day: "borg create --stats /path/to/repo::Tuesday ~/src
>>>> ~/Documents".
>>>>
>>>> It says: "This backup will be a lot quicker and a lot smaller since only
>>>> new never before seen data is stored."
>>>>
>>>> O.K., I understand. But what if I wanted to restore the latest backup
>>>> (the "Tuesday" one)?
>>>> Since this one only holds the new data I suppose I'd have to keep the
>>>> "Monday" backup, too.
>>>>
>>>> But what would be the correct command? Would it be "borg extract
>>>> /path/to/repo::Tuesday"?
>>>> After all I want to restore the latest backup.
>>>> Would "borg extract /path/to/repo::Tuesday" take care of the "Monday"
>>>> backup automatically since the bulk of the data is stored there?
>>>>
>>>> Thanks a lot for your help.
>>>>
>>>> Greetings
>>>> Rosika
>>>>
>>>> _______________________________________________
>>>> Borgbackup mailing list
>>>> Borgbackup at python.org
>>>> https://mail.python.org/mailman/listinfo/borgbackup
>>>>
>

From w at swtk.info Sat Sep 15 13:06:37 2018
From: w at swtk.info (Wojtek Swiatek)
Date: Sat, 15 Sep 2018 19:06:37 +0200
Subject: [Borgbackup] question regarding restoration of latest backup
In-Reply-To: <7f72e50e-e551-e1ea-b96b-718fce448457@freenet.de>
References: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de> <2e8c97b2-bdc9-cdf6-c69e-c29add2976bd@enkore.de> <6a439705-43b7-40f3-080e-0f938b5b0a9f@freenet.de> <7f72e50e-e551-e1ea-b96b-718fce448457@freenet.de>
Message-ID:

To add to the answer: it is really worthwhile playing with pruning and adapting what you keep to your risk profile.
As an example my backups are:

root at srv ~# borg list /services/backup/borg/
srv-2018-08-19T20:00:01+02:00 Sun, 2018-08-19 20:00:02 [ca918a240129e3ae4cd803da8d6b3de14434dff19b0324a9ad879bb4f9033623]
srv-2018-08-26T20:00:01+02:00 Sun, 2018-08-26 20:00:02 [d96c3b2590afdd088c051a20a604a52355ecd4db28abad6badf13e43666fea87]
srv-17dc7ba45cfd4b24a0df38154173c32d Sun, 2018-09-02 21:00:03 [7f295076d550bca2859b19d62f4670a576bd13ca84aa04d05d803978fd05d3ad]
srv-b88464c5db80445297375871a02add95 Thu, 2018-09-06 21:00:04 [896c6bde242a16c35e5894e134c188da8a6d515be3e905f347b20c67d28b51f4]
srv-33529fc25c84499e98646de6a555fd2a Fri, 2018-09-07 21:00:04 [f331440738cba678f2ccb0c81a00731e442de24e1ba0d75ecb4f8fa7fb6de8fc]
srv-0cec869710d74ad4b005148c9147a8e3 Sat, 2018-09-08 21:00:04 [d57e5d45addee30caef21e8a47827c788d2c3cb717b9f1a5c0e3c68598f9c55a]
srv-05df149848c84f32bc3c135c567af7f4 Sun, 2018-09-09 21:00:04 [306a3fceb9588898e134461fd08ee912231fa05782865fad56b1ddc2d7c802a9]
srv-e59d0df80d6d475a8152686ee6f002de Mon, 2018-09-10 21:00:04 [e2bbff2175f7d6fe16e53eab0894f3ce7ff4341448e26cb0226c2fb0e833d95e]
srv-7cd6f3105fcd4d88849a4a31aac0ca1a Tue, 2018-09-11 21:00:11 [e42fcb84d022992be275a3dcd39c073142100c727dcc9a38c2b4ff5505004f95]
srv-349a31f6044144dea02ce94c298c3341 Wed, 2018-09-12 18:00:05 [6609c5784e0278c06b518933ebcc6acfabfff1773060c395adcfcb62b5c69e6c]
srv-494092c323e44b508492c2946c2a2c31 Thu, 2018-09-13 21:00:04 [73b788010bdbc197c29d170f86d745f057a4fcda263d957e048bb6d3dbc09a85]
srv-58a00dacf6d54cbaa88a23650cd1bb25 Fri, 2018-09-14 00:00:04 [a6e5ff1d0d39941cbf7251bc8f4ba8c0b4a2f8b9801a543ad289a734475fc632]
srv-52190f860bdb4f998dc3ef79a64afb72 Fri, 2018-09-14 03:00:04 [48b9e845815c2dd743eec34d16c24aaed85db54b9a9ca629a3e3fc5790f381a1]
srv-0b84d425148e404ab59f15c0fdb016d6 Fri, 2018-09-14 06:00:04 [7bde1c378da76f286c1757533ceddf3277c5c3cf6ff9a6f986e4ed7746db5ca8]
srv-2b60fb979b60462f93847713c244d0f0 Fri, 2018-09-14 09:00:04 [939d08f6aa6bd06117238730d0a5536e8a7cd82ebcf8023a8c9708f351ef41fd]
srv-83e0f05e1a4d4926a0a7a1c6e5e2a318 Fri, 2018-09-14 12:00:03 [2513500d9d3d8b0350257315cb6babaa898437ffa89501190e6761dd245cdb1e]
srv-4904f197c6a9411aa19cf85bf0c00e15 Fri, 2018-09-14 15:00:04 [09ba0e56e0fe87cbc1461cf0e026995a3b30d45b67420a8f641d81d015e27d33]
srv-aa9a11d50f42416082866742a256d977 Fri, 2018-09-14 18:00:04 [2cbae32e80d85d5fb5bf37176e02ddc1e0c3d0dc16dac3c3110a965a25f1bf87]
srv-886a0874e3ba45b0818e8601387358bc Fri, 2018-09-14 21:00:04 [ff7b345ece86836996beee1d38aab850964bb779f9084373ecd1e290ab35da3e]
srv-b6ae75535c9249e9810a13aee64261ec Sat, 2018-09-15 00:00:04 [75f9462c01f86c6220220b99dfb018f171a52e213cd53fab9505b3e127e427ca]
srv-b41bcb1e914e4c7a9afee0d0cc161664 Sat, 2018-09-15 03:00:05 [8ace767e7c1abb707e0d7f4e1d976a24c89c0336b211976ef564d11360a51e33]
srv-68384133c1ce45ad96c02e2a52906e8a Sat, 2018-09-15 06:00:05 [3c8a2153c217e6c03478772db4cc2a01f6a3ff3910c785c50efe9016922f5445]
srv-d2083d432b9243bf89d314bb2fc65c54 Sat, 2018-09-15 09:00:04 [b889dc2da68870f38a11c06e4592806935ea9f192b5dffbec3190825cbbd9f32]
srv-5042ecd186114756bd0f80f9b19817d5 Sat, 2018-09-15 12:00:04 [5019f92147664d28c0257a75cfefef6c3574e4bf3a409bc2b20f42e9a783c9b0]
srv-f51711db800a449598c5821b5266c240 Sat, 2018-09-15 15:00:04 [be67fa17f2c3c3ce391ee2c6d77207f8279f1a04c33ed1a332733c44e18d562c]
srv-06eeec8d2fd641d39ea2c675532f4c79 Sat, 2018-09-15 18:00:04 [3411dda838b3be3bf3e5433bc55a3ef476d5bee289628ced8cc175a7dfc8cdfd]

They follow a scheme of --keep-daily=7 --keep-weekly=4 --keep-within 2d (enforced by pruning). The idea is that:
- the 4 weeklies are "just in case"; I will never need them since I make many changes in my chaotic system, but in case I remember I want something within a span of a month, it will be there
- the dailies are to have 7 days of backups, one per day
- the "keep within 2 days" is to keep all backups I make within two days (they are started every 3 hours)

As you can notice, I changed the naming scheme in the meantime and will need to wait about two more weeks for the old one to be gone.

In my risk profile, this allows me to have a continuity of backups to recover from bad ideas for 2 days almost continuously (every 3 hours), then recover from possibly less bad ideas every day (for 7 days), and then from even less bad ideas a few more times, across a month. "Bad ideas" here mean things I did that ended up not being the right ones to do.

What is fantastic with borg is that this schedule does not actually matter for the backup. I could back up every 10 minutes and nothing would change (just more backups kept) - except that the machine would be backing up continuously. Disk-wise it does not matter much in my case (the changes are small, if any).

I could remove, say, srv-d2083d432b9243bf89d314bb2fc65c54 from today and I would still have the data from srv-68384133c1ce45ad96c02e2a52906e8a and then srv-5042ecd186114756bd0f80f9b19817d5.

I find that backing up "often", and then following the backup with a prune to keep only what is needed, works best for me.

Cheers,
Wojtek

On Sat, Sep 15, 2018 at 18:39, Christian wrote:
> Hi Marian,
>
> thanks again for your answer.
>
> Now I get it. That's really fantastic.
> I just tried it out by adding a new file to the original folder, then
> backing it up and deleting the first backup.
> And indeed the second backup holds all data including the newly
> created one.
>
> borgbackup is really a wonderful programme. Now that I know how it
> works
> I like it even better.
>
> Thanks again for your fast help.
>
> Have a nice weekend.
>
> Greetings.
> Rosika
>
> On 2018-09-15 at 17:57, Marian Beermann wrote:
> > Yes, that's indeed the case.
> >
> > Each archive can be independently manipulated from every other
> archive.
> > Borg recognizes duplicate data (e.g. from the Monday and Tuesday
> backups)
> > and stores it only once; deleting Monday will not delete any
> data still
> > used by Tuesday.
> >
> > -Marian
> >
> > On 9/15/18 5:48 PM, Christian wrote:
> >> Hi Marian,
> >>
> >>
> >> thank you so much for your fast answer.
> >>
> >> Yet it's not quite clear to me how it works.
> >>
> >> Referring to my example: Do you say that I can delete the Monday-backup?
> >> That would imply that all the data of the Monday-backup is present in
> >> the Tuesday-backup as well.
> >>
> >> I was of the opinion that the Tuesday-backup only holds the new data. Or
> >> did I get it wrong?
> >>
> >> Greetings
> >> Rosika
> >>
> >>
> >>
> >> On 2018-09-15 at 17:37, Marian Beermann wrote:
> >>> You don't have to think about any of that.
> >>>
> >>> You can create an archive.
> >>> You can extract it.
> >>> Deleting other archives makes no difference.
> >>>
> >>> -Marian
> >>>
> >>> PS: That's the trade-off made by Borg and similar software. Requires
> >>> more CPU and memory resources, but gets rid of inter-backup
> dependencies.
> >>>
> >>> On 9/15/18 5:27 PM, Christian wrote:
> >>>> Hi altogether,
> >>>>
> >>>> I've got a question regarding restoration of backups.
> >>>>
> >>>> I looked at
> https://borgbackup.readthedocs.io/en/stable/quickstart.html
> >>>> for information. But somehow I couldn't find what I was looking for.
> >>>>
> >>>> Out of sheer interest:
> >>>>
> >>>> Suppose I created a backup with "borg create /path/to/repo::Monday
> ~/src
> >>>> ~/Documents".
> >>>> And the next day: "borg create --stats /path/to/repo::Tuesday ~/src
> >>>> ~/Documents".
> >>>>
> >>>> It says: "This backup will be a lot quicker and a lot smaller since
> only
> >>>> new never before seen data is stored."
> >>>>
> >>>> O.K., I understand. But what if I wanted to restore the latest backup
> >>>> (the "Tuesday" one)?
> >>>> Since this one only holds the new data I suppose I'd have to keep the
> >>>> "Monday" backup, too.
> >>>>
> >>>> But what would be the correct command? Would it be "borg extract
> >>>> /path/to/repo::Tuesday"?
> >>>> After all I want to restore the latest backup.
> >>>> Would "borg extract /path/to/repo::Tuesday" take care of the "Monday" > >>>> backup automatically since the bulk of the data is stored there? > >>>> > >>>> Thanks a lot for your help. > >>>> > >>>> Greetings > >>>> Rosika > >>>> > >>>> _______________________________________________ > >>>> Borgbackup mailing list > >>>> Borgbackup at python.org > >>>> https://mail.python.org/mailman/listinfo/borgbackup > >>>> > > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abelschreck3 at freenet.de Sun Sep 16 08:48:38 2018 From: abelschreck3 at freenet.de (Christian) Date: Sun, 16 Sep 2018 14:48:38 +0200 Subject: [Borgbackup] question regarding restoration of latest backup In-Reply-To: References: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de> <2e8c97b2-bdc9-cdf6-c69e-c29add2976bd@enkore.de> <6a439705-43b7-40f3-080e-0f938b5b0a9f@freenet.de> <7f72e50e-e551-e1ea-b96b-718fce448457@freenet.de> Message-ID: Hi Wojtek, thanks for sharing your profile. It?s very impressive how you manage to handle those various possibilities borg offers. One has to dig really deep and get acquainted with all the options to make the best of it. As I still have another question, but on another topic, I think it?s best to post it as a separate item on the mailing list. Greetings Rosika Am 15.09.2018 um 19:06 schrieb Wojtek Swiatek: They follow a scheme of?--keep-daily=7 --keep-weekly=4 --keep-within 2d (enforced by pruning them) The idea is that? - the 4 weeklies are "just in case", I will never need them since I make many changes in my chaotic system but, in case I remember I want something in a span of a month it will be there - the dailies are to have 7 days of? 
backups, one per day
- the "keep within 2 days" is to keep all backups I make within two days (they are started every 3 hours)

As you can notice, I changed the naming scheme in the meantime and will need to wait ~two more weeks for the old one to be gone.

In my risk profile, this allows me to have a continuity of backups to recover from bad ideas for 2 days almost continuously (every 3 hours), then recover from possibly less bad ideas every day (for 7 days), and then from even less bad ideas a few more times, across a month. "Bad ideas" here mean things I did that ended up not being the right ones to do.

What is fantastic with borg is that this schedule does not actually matter for the backup. I could back up every 10 minutes and nothing would change (just more backups kept) - except that the machine would be backing up continuously. Disk-wise it does not matter much in my case (the changes are small, if any).

I could remove, say, srv-d2083d432b9243bf89d314bb2fc65c54 from today and I would still have the data from srv-68384133c1ce45ad96c02e2a52906e8a and then srv-5042ecd186114756bd0f80f9b19817d5.

I find that backing up "often", and then following the backup with a prune to keep only what is needed, works best for me.

Cheers,
Wojtek

>
> On Sat, Sep 15, 2018 at 18:39, Christian wrote:
>
> Hi Marian,
>
> thanks again for your answer.
>
> Now I get it. That's really fantastic.
> I just tried it out by adding a new file to the original folder, then
> backing it up and deleting the first backup.
> And indeed the second backup holds all data including the newly
> created one.
>
> borgbackup is really a wonderful programme. Now that I know how it
> works
> I like it even better.
>
> Thanks again for your fast help.
>
> Have a nice weekend.
>
> Greetings.
> Rosika
>
> On 2018-09-15 at 17:57, Marian Beermann wrote:
> > Yes, that's indeed the case.
> >
> > Each archive can be independently manipulated from every other
> archive.
> > Borg recognizes duplicate data (e.g. from the Monday and Tuesday)
> backups
> > and stores it only once; deleting Monday will not delete any
> data still
> > used by Tuesday.
> >
> > -Marian
> >
> > On 9/15/18 5:48 PM, Christian wrote:
> >> Hi Marian,
> >>
> >>
> >> thank you so much for your fast answer.
> >>
> >> Yet it's not quite clear to me how it works.
> >>
> >> Referring to my example: Do you say that I can delete the
> Monday-backup?
> >> That would imply that all the data of the Monday-backup is
> present in
> >> the Tuesday-backup as well.
> >>
> >> I was of the opinion that the Tuesday-backup only holds the new
> data. Or
> >> did I get it wrong?
> >>
> >> Greetings
> >> Rosika
> >>
> >>
> >>
> >> On 2018-09-15 at 17:37, Marian Beermann wrote:
> >>> You don't have to think about any of that.
> >>>
> >>> You can create an archive.
> >>> You can extract it.
> >>> Deleting other archives makes no difference.
> >>>
> >>> -Marian
> >>>
> >>> PS: That's the trade-off made by Borg and similar software.
> Requires
> >>> more CPU and memory resources, but gets rid of inter-backup
> dependencies.
> >>>
> >>> On 9/15/18 5:27 PM, Christian wrote:
> >>>> Hi altogether,
> >>>>
> >>>> I've got a question regarding restoration of backups.
> >>>>
> >>>> I looked at
> https://borgbackup.readthedocs.io/en/stable/quickstart.html
> >>>> for information. But somehow I couldn't find what I was
> looking for.
> >>>>
> >>>> Out of sheer interest:
> >>>>
> >>>> Suppose I created a backup with "borg create
> /path/to/repo::Monday ~/src
> >>>> ~/Documents".
> >>>> And the next day: "borg create --stats /path/to/repo::Tuesday
> ~/src
> >>>> ~/Documents".
> >>>>
> >>>> It says: "This backup will be a lot quicker and a lot smaller
> since only
> >>>> new never before seen data is stored."
> >>>>
> >>>> O.K., I understand. But what if I wanted to restore the
> latest backup
> >>>> (the "Tuesday" one)?
> >>>> Since this one only holds the new data I suppose I'd have to
> keep the
> >>>> "Monday" backup, too.
> >>>>
> >>>> But what would be the correct command? Would it be "borg extract
> >>>> /path/to/repo::Tuesday"?
> >>>> After all I want to restore the latest backup.
> >>>> Would "borg extract /path/to/repo::Tuesday" take care of the
> "Monday"
> >>>> backup automatically since the bulk of the data is stored there?
> >>>>
> >>>> Thanks a lot for your help.
> >>>>
> >>>> Greetings
> >>>> Rosika
> >>>>
> >>>> _______________________________________________
> >>>> Borgbackup mailing list
> >>>> Borgbackup at python.org
> >>>> https://mail.python.org/mailman/listinfo/borgbackup
> >>>>
> >
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From abelschreck3 at freenet.de Sun Sep 16 08:58:06 2018
From: abelschreck3 at freenet.de (Christian)
Date: Sun, 16 Sep 2018 14:58:06 +0200
Subject: [Borgbackup] backup of home partition
Message-ID: <0baa392a-8099-993c-403b-0feb7d0e0c55@freenet.de>

Hi altogether,

I've got a question concerning the backup of the home partition.

----------------------------------------------------------
Info
My system: Linux/Lubuntu 16.04.5 LTS, 64 bit
----------------------------------------------------------

What if I want to back up my entire home partition? How does borg go about doing that?

More specifically: Can I use my system during the backup? When using it, there will certainly be write processes going on for certain files. Is there some kind of snapshot taken by borg before the backup begins, or am I not supposed to use my computer during the backup?
Greetings Rosika From felix.schwarz at oss.schwarz.eu Sun Sep 16 09:37:46 2018 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Sun, 16 Sep 2018 15:37:46 +0200 Subject: [Borgbackup] backup of home partition In-Reply-To: <0baa392a-8099-993c-403b-0feb7d0e0c55@freenet.de> References: <0baa392a-8099-993c-403b-0feb7d0e0c55@freenet.de> Message-ID: <235481d5-2b63-f492-936e-c5d2ce33e285@oss.schwarz.eu> Am 16.09.18 um 14:58 schrieb Christian: > What if I want to backup my entire home-partition? How does borg go > about doing that? > More specifically: Can I use my system during backup? Because when using > it there will certainly be write-processes going on to certain files. > Is there some kind of snapshot taken by borg before beginning to backup > or am I not supposed to use my computer during backup? I think there is not a yes/no answer to your question. Basically you are asking for "a consistent state". borg will not do anything special so it will just backup the files without trying to ensure consistency (but it can handle vanishing files without crashing). Consistent backups require external help (e.g. running a script before/after the backup) because files are only part of the issue. If you want to backup a database usually you need to run some DB-specific scripts (e.g. to dump the DB as SQL file). So if you really need a consistent backup state you need to run additional scripts. For example LVM/btrfs can create file system snapshots. Personally I can just ignore the problem for my desktop machines (now using borg for quite a few years without problems). Felix From abelschreck3 at freenet.de Sun Sep 16 10:42:02 2018 From: abelschreck3 at freenet.de (Christian) Date: Sun, 16 Sep 2018 16:42:02 +0200 Subject: [Borgbackup] backup of home partition Message-ID: Hi Felix, thanks a lot for your answer. > [...] (but it can handle vanishing files without crashing). Fine, that sounds good. 
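The snapshot approach Felix mentions (LVM/btrfs) can be sketched roughly as a pre/post wrapper around borg. All device, mount-point, and repository names below are invented placeholders, and the script only prints the commands it would run rather than executing them:

```shell
# Dry-run sketch of a snapshot-based consistent backup, per Felix's suggestion.
# vg0/home, /mnt/home-snap, and /path/to/repo are invented placeholders.
LOG=""
run() { echo "+ $*"; LOG="$LOG $1"; }  # dry run: print only; use "$@" to really execute

run lvcreate --size 2G --snapshot --name home-snap /dev/vg0/home  # point-in-time view
run mount -o ro /dev/vg0/home-snap /mnt/home-snap                 # mount it read-only
run borg create --stats /path/to/repo::home-snapshot /mnt/home-snap
run umount /mnt/home-snap
run lvremove -f /dev/vg0/home-snap   # live snapshots consume space; drop when done
```

Because borg then reads from a frozen view, files changing on the live filesystem during the backup no longer matter.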
> So if you really need a consistent backup state you need to run additional scripts. Yeah, that more or less was my question. I really don't know whether or not I *need* a consistent backup state. I just wanted to know if borg can handle situations in which files may change during the backup process. > Personally I can just ignore the problem for my desktop machines (now using borg for quite a few years without problems) But that sounds good. Thanks for clarifying the matter. Have a nice day. Greetings Rosika -------------- next part -------------- An HTML attachment was scrubbed... URL: From w at swtk.info Mon Sep 17 03:05:13 2018 From: w at swtk.info (Wojtek Swiatek) Date: Mon, 17 Sep 2018 09:05:13 +0200 Subject: [Borgbackup] question regarding restoration of latest backup In-Reply-To: References: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de> <2e8c97b2-bdc9-cdf6-c69e-c29add2976bd@enkore.de> <6a439705-43b7-40f3-080e-0f938b5b0a9f@freenet.de> <7f72e50e-e551-e1ea-b96b-718fce448457@freenet.de> Message-ID: The "eureka" moment for me was when I realized that the best combo for my backup scenario is backup + pruning. I backup often and then immediately make sure afterwards that I only keep what is needed. For what it's worth, I use a systemd timer to manage that: [Unit] Description=borg backup [Service] Type=oneshot Environment=BORG_REPO=/services/backup/borg/ Environment=BORG_HOSTNAME_IS_UNIQUE=yes ExecStart=/usr/bin/borg create --filter AME --exclude-from=/services/backup/borg-exclude-srv.txt --list --stats ::srv-${INVOCATION_ID} / ExecStartPost=/usr/bin/borg prune -v --list --keep-daily=7 --keep-weekly=4 --keep-within 2d --stats Cheers Wojtek Le dim. 16 sept. 2018 à 14:51, Christian a écrit : > Hi Wojtek, > > thanks for sharing your profile. It's very impressive how you manage to > handle those various possibilities borg offers. > One has to dig really deep and get acquainted with all the options to make > the best of it.
> > As I still have another question, but on another topic, I think it's best > to post it as a separate item on the mailing list. > > Greetings > Rosika > > > Am 15.09.2018 um 19:06 schrieb Wojtek Swiatek: > > They follow a scheme of --keep-daily=7 --keep-weekly=4 --keep-within 2d > (enforced by pruning them) > The idea is that > - the 4 weeklies are "just in case", I will never need them since I make > many changes in my chaotic system but, in case I remember I want something > in a span of a month it will be there > - the dailies are to have 7 days of backups, one per day > - the "keep within 2 days" is to keep all backups I make within two days > (they are started every 3 hours) > As you can notice, I changed the naming scheme in the meantime and will > need to wait ~two more weeks for the old one to be gone. > > In my risk profile, this allows to have a continuity of backups to recover > from bad ideas for 2 days almost continuously (3 hours), then recover from > possibly > less bad ideas every day (for 7 days) and then from even less bad ideas a > few times more, across a month. > "Bad ideas" here mean some things I did and ended up not being the right > ones to do. > > What is fantastic with borg is that this schedule actually does not matter > for the backup. I could backup every 10 minutes and nothing would change > (just more backups kept) - except that the machine would be backing up > continuously. > Disk wise it does not matter much in my case (the changes are small, if > any) > > I could remove, say, srv-d2083d432b9243bf89d314bb2fc65c54 from today and I > would have the data from srv-68384133c1ce45ad96c02e2a52906e8a > and then srv-5042ecd186114756bd0f80f9b19817d5 > > I find that backing up "often" is the solution, and then follow the backup > with a prune to actually keep what is needed works best for me > > Cheers, > Wojtek > > > > Le sam. 15 sept. 2018 à 18:39, Christian a > écrit : > >> Hi Marian, >> >> thanks again for your answer.
>> >> Now I get it. That?s really fantastic. >> I just tried it out by adding a new file to the original folder, then >> backing it up and deleting the first backup. >> And indeed the second backup holds all data including the newly created >> one. >> >> borgbackup is really a wonderful programme. Now that I know how it works >> I like it even better. >> >> Thanks again for your fast help. >> >> Have a nice weekend. >> >> Greetings. >> Rosika >> >> Am 15.09.2018 um 17:57 schrieb Marian Beermann: >> > Yes, that's indeed the case. >> > >> > Each archive can be independently manipulated from every other archive. >> > Borg recognizes duplicate data (e.g. from Monday and Tuesday) backups >> > and stores it only once; deleting Monday will not delete any data still >> > used by Tuesday. >> > >> > -Marian >> > >> > On 9/15/18 5:48 PM, Christian wrote: >> >> Hi Marian, >> >> >> >> >> >> thank you so much for your fast answer. >> >> >> >> Yet it?s not quite clear to me how it works. >> >> >> >> Referring to my example: Do you say that I can delete the >> Monday-backup? >> >> That would imply that all the data of the Monday-backup is present in >> >> the Tuesday-backup as well. >> >> >> >> I was of the opinion that the Tuesday-backup only holds the new data. >> Or >> >> did I get it wrong? >> >> >> >> Greetings >> >> Rosika >> >> >> >> >> >> >> >> Am 15.09.2018 um 17:37 schrieb Marian Beermann: >> >>> You don't have to think about any of that. >> >>> >> >>> You can create an archive. >> >>> You can extract it. >> >>> Deleting other archives makes no difference. >> >>> >> >>> -Marian >> >>> >> >>> PS: That's the trade-off made by Borg and similar software. Requires >> >>> more CPU and memory resources, but gets rid of inter-backup >> dependencies. >> >>> >> >>> On 9/15/18 5:27 PM, Christian wrote: >> >>>> Hi altogether, >> >>>> >> >>>> I?ve got a question regarding restoration of backups. 
>> >>>> >> >>>> I looked at >> https://borgbackup.readthedocs.io/en/stable/quickstart.html >> >>>> for information. But somehow I couldn?t find what I was looking for. >> >>>> >> >>>> Out of sheer interest: >> >>>> >> >>>> Suppose I created a backup with "borg create /path/to/repo::Monday >> ~/src >> >>>> ~/Documents" . >> >>>> And the next day: "borg create --stats /path/to/repo::Tuesday ~/src >> >>>> ~/Documents" . >> >>>> >> >>>> It says: "This backup will be a lot quicker and a lot smaller since >> only >> >>>> new never before seen data is stored." >> >>>> >> >>>> O.K., I understand. But what if I wanted to restore the latest backup >> >>>> (the "Tuesday" one)? >> >>>> Since this one only holds the new data I suppose I?d have to keep the >> >>>> "Monday" backup, too. >> >>>> >> >>>> But what would be the correct command? Would it be "borg extract >> >>>> /path/to/repo::Tuesday" ? >> >>>> After all I want to retore the latest backup. >> >>>> Would "borg extract /path/to/repo::Tuesday" take care of the >> "Monday" >> >>>> backup automatically since the bulk of the data is stored there? >> >>>> >> >>>> Thanks a lot for your help. >> >>>> >> >>>> Greetings >> >>>> Rosika >> >>>> >> >>>> _______________________________________________ >> >>>> Borgbackup mailing list >> >>>> Borgbackup at python.org >> >>>> https://mail.python.org/mailman/listinfo/borgbackup >> >>>> >> > >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gait at ATComputing.nl Mon Sep 17 04:58:12 2018 From: gait at ATComputing.nl (Gerrit A. 
Smit) Date: Mon, 17 Sep 2018 10:58:12 +0200 Subject: [Borgbackup] question regarding restoration of latest backup In-Reply-To: <7f72e50e-e551-e1ea-b96b-718fce448457@freenet.de> References: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de> <2e8c97b2-bdc9-cdf6-c69e-c29add2976bd@enkore.de> <6a439705-43b7-40f3-080e-0f938b5b0a9f@freenet.de> <7f72e50e-e551-e1ea-b96b-718fce448457@freenet.de> Message-ID: Op 15-09-18 om 18:31 schreef Christian: > Now I get it. That's really fantastic. > I just tried it out by adding a new file to the original folder, then > backing it up and deleting the first backup. > And indeed the second backup holds all data including the newly created one. > > borgbackup is really a wonderful programme. Now that I know how it works > I like it even better. > Hello, *In this context*, you can see a Borg-repository as a filesystem. If a file is in more than one archive, the link count will be >1 (1 link per archive). The file will only be removed when the link count tries to be zero. (Reminds me of my first use of BorgBackup, trying to store a file tree with much of the files having more than 1 hard link (up to 18 links). Although the problem was solved very quickly, BB choked on that: Too much metadata!). (mind you: BB-links are not filesystem links) -- Met vriendelijke groeten / kind regards, AT COMPUTING BV Gerrit A. Smit AT Computing Telefoon: +31 24 352 72 22 Dé one-stop-Linux-shop Telefoon cursussecretariaat: +31 24 352 72 72 Kerkenbos 12-38 TI at ATComputing.nl 6546 BE Nijmegen www.atcomputing.nl https://www.linkedin.com/in/gesmit From abelschreck3 at freenet.de Mon Sep 17 11:58:24 2018 From: abelschreck3 at freenet.de (Christian) Date: Mon, 17 Sep 2018 17:58:24 +0200 Subject: [Borgbackup] question regarding restoration of latest backup In-Reply-To: References: <96d7d391-c683-ee06-7697-6001f0f04812@freenet.de> <2e8c97b2-bdc9-cdf6-c69e-c29add2976bd@enkore.de> <6a439705-43b7-40f3-080e-0f938b5b0a9f@freenet.de> <7f72e50e-e551-e1ea-b96b-718fce448457@freenet.de> Message-ID: Hi Wojtek, thanks for sharing your insights. Well, that's really a professional way of going about backing up. I'm impressed. Greetings Rosika Am 17.09.2018 um 09:05 schrieb Wojtek Swiatek: > The "eureka" moment for me was when I realized that the best combo for > my backup scenario is backup + pruning. I backup often and then > immediately make sure afterwards that I only keep what is needed. > For what it's worth, I use a systemd timer to manage that: > > [Unit] > Description=borg backup > > [Service] > Type=oneshot > Environment=BORG_REPO=/services/backup/borg/ > Environment=BORG_HOSTNAME_IS_UNIQUE=yes > ExecStart=/usr/bin/borg create --filter AME > --exclude-from=/services/backup/borg-exclude-srv.txt --list --stats > ::srv-${INVOCATION_ID} / > ExecStartPost=/usr/bin/borg prune -v --list --keep-daily=7 > --keep-weekly=4 --keep-within 2d --stats > > Cheers > Wojtek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abelschreck3 at freenet.de Tue Sep 18 07:08:39 2018 From: abelschreck3 at freenet.de (Christian) Date: Tue, 18 Sep 2018 13:08:39 +0200 Subject: [Borgbackup] question regarding restoration of latest, backup Message-ID: <6e55c157-4a11-306c-0b4f-b5aa572a36d3@freenet.de> Hi Gerrit, thanks for the clarification. > If a file is in more than one archive, the link count will be >1 (1 link per archive).
The file will only be removed when the link count tries to be zero. Now it?s become much more plausible. Greetings Rosika From sitaramc at gmail.com Thu Sep 27 00:47:40 2018 From: sitaramc at gmail.com (Sitaram Chamarty) Date: Thu, 27 Sep 2018 10:17:40 +0530 Subject: [Borgbackup] dnf upgrade on fedora 28 appears to break borg Message-ID: <20180927044740.GA8135@sita-lt.atc.tcs.com> A "dnf upgrade" on fedora 28 appears to break borg. Not sure what "msgpack" is but perhaps they made some unexpected change between 0.5.5 and 0.5.6? For now I downgraded by `dnf install python3-msgpack-0.5.5`, which also downgraded borg to 1.1.4. 1.1.4 is fine for me, so I actually don't have an issue for now, but I thought I should report it anyway. regards sitaram (The following is from before the downgrade of course) # borg -V Traceback (most recent call last): File "/usr/bin/borg", line 6, in from pkg_resources import load_entry_point File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3095, in @_call_aside File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3079, in _call_aside f(*args, **kwargs) File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3108, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 570, in _build_master ws.require(__requires__) File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 888, in require needed = self.resolve(parse_requirements(requirements)) File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 774, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'msgpack-python!=0.5.0,!=0.5.1,!=0.5.2,!=0.5.3,!=0.5.4,!=0.5.5,<=0.5.6,>=0.4.6' distribution was not found and is required by borgbackup # rpm -qa --last | grep -i -e msgpack -e borg borgbackup-1.1.7-1.fc28.x86_64 Thu 27 Sep 2018 09:27:34 AM IST msgpack-3.1.0-1.fc28.x86_64 Wed 
26 Sep 2018 05:50:50 PM IST python3-msgpack-0.5.6-5.fc28.x86_64 Tue 25 Sep 2018 10:28:13 AM IST From cl_111 at hotmail.com Sat Sep 29 03:37:33 2018 From: cl_111 at hotmail.com (C L) Date: Sat, 29 Sep 2018 07:37:33 +0000 Subject: [Borgbackup] questions about append-only mode repository Message-ID: Hi Folks! I've been trialing borgbackup 1.1.x for a short time now and found it to be ticking all the boxes so far. However I'm trying to wrap my head around use-cases for append-only mode when it applies to multiple client machines accessing a central remote repository and whether this functionality is currently feature complete or should even be used in such scenarios. Based on what I've read in the documentation, a repository can be made "append-only", which means that Borg will never overwrite or delete committed data. However, the documentation continues with an example of a compromised client machine that has remotely deleted backups from the repository. On the surface of it, I would expect an append-only repository to deny any remote "borg delete" or "borg prune" commands to occur from any borg client. Instead from the documentation a "soft-delete" is permitted on the repository and the transaction logged. Such "soft-deleted" transactions are (silently?) processed only when the repository is accessed in a non-append-only mode with an appropriate "borg {delete,prune,create}" command, typically executed from a more trusted machine than the client machines. For a non-compromised client machine running a scheduled backup job which applies its own "borg prune" rules to archives prefixed by its hostname, this seems like overkill considering the administrator would have to run "borg prune" from a more trusted machine and apply it across all the archives in the repository, irrespective of prefix.
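For what it's worth, the per-prefix pruning described above could be scripted from the trusted machine along these lines. Client names, repo path, and retention counts are invented for illustration, and the script only prints the commands it would run (`--prefix` is the borg 1.1 option that restricts pruning to one client's archives):

```shell
# Dry-run sketch: a trusted machine applies per-client retention on one shared repo.
# Client names, repo path, and retention counts are invented placeholders.
REPO=/path/to/repo
LOG=""
run() { echo "+ $*"; LOG="$LOG $*"; }  # dry run: print only; swap for real execution

for client in alpha beta gamma; do
    # --prefix limits pruning to this client's archives, so one client's
    # retention policy cannot delete another client's backups.
    run borg prune --prefix "${client}-" --keep-daily=7 --keep-weekly=4 "$REPO"
done
```

Run non-interactively from the trusted host, this keeps retention per-client even though the prune itself never executes on the (potentially compromised) clients.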
A potential race condition exists between a compromised, but undetected, client machine that has "soft-deleted" archives from the repository and the trusted machine that next "borg prunes" the repository. There is obviously a sliding scale with: * the level of trust/risk that any client machine has to the repository; and * on the amount of work an administrator must perform to maintain backup sets and yet provide some flexibility with global/per-client machine retention policy; and * to detect and react to compromised client machines which have access to the repository. To the borg assimilated community, I have the current questions: 1. As currently implemented, are append-only mode repositories just more work to maintain with little reward, or is that just my initial, inexperienced impression with borg? 2. What real-world use-cases is an append-only mode repository with prunes (no plums involved haha) actually being used, if at all? 3. Is the documentation missing a really obvious point with append-mode repositories that is clear to everyone having expert borg knowledge but hasn't occurred to those with novice borg knowledge? 4. Was the implementation of an append-only mode feature a knee-jerk reaction to "fix" something without addressing the real core problem/risk underlying the feature requested (i.e.: mitigating the risk destructive operations has to a repository from borg clients on untrusted/semi-trusted client machines)? Sincerely, and with great respect. Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From public at enkore.de Sat Sep 29 04:30:29 2018 From: public at enkore.de (Marian Beermann) Date: Sat, 29 Sep 2018 10:30:29 +0200 Subject: [Borgbackup] questions about append-only mode repository In-Reply-To: References: Message-ID: <9f8bb300-e563-0f06-5ec9-a69047bfadc7@enkore.de> Hi Christian, On 9/29/18 9:37 AM, C L wrote: > Hi Folks! 
> > I've been trialing borgbackup 1.1.x for a short time now and found it to > be ticking all the boxes so far. > > However I'm trying to wrap my head around use-cases for append-only mode > when it applies to multiple client machines accessing a central remote > repository and whether this functionality is currently feature complete > or should even be used in such scenarios. append-only mode is a misfeature that was implemented because everyone assumed the "proper solution" was only a few months away (a common theme). It exposes a low-level implementation detail and requires a lot of low-level knowledge about borg to use "correctly". > Based on what I've read in the documentation, a repository can be made > ?append-only?, which means that Borg will never overwrite or delete > committed data.? However, the documentation continues with an example of > a compromised client machine that has remotely deleted backups from the > repository. > > On the surface of it, I would expect an append-only repository to deny > any remote "borg delete" or "borg prune" commands to occur from any borg > client. > > Instead from the documentation a "soft-delete" is permitted on the > repository and the transaction logged.? Such "soft-deleted" transactions > are (silently?) processed only when the repository is accessed in a > non-append-only mode with an appropriate "borg {delete,prune,create}" > command, typically executed from a more trusted machine than the client > machines. > > For an non-compromised client machine running a scheduled backup job > which applies it's own "borg prune" rules onto archives prefixed by it's > hostname seems like overkill considering the administrator would have to > run "borg prune" from a more trusted machine and apply it across all the > archives in the repository, irrespective of prefix. 
> > A potential race condition exists between a compromised, but undetected, > client machine that has "soft-deleted" archives from the repository and > the trusted machine that next "borg prunes" the repository. Correct. > There is obviously a sliding scale with: > > * the level of trust/risk that any client machine has to the > repository; and > * on the amount of work an administrator must perform to maintain > backup sets and yet provide some flexibility with global/per-client > machine retention policy; and > * to detect and react to compromised client machines which have access > to the repository. > > To the borg assimilated community, I have the current questions: > > 1. As currently implemented, are append-only mode repositories just > more work to maintain with little reward, or is that just my > initial, inexperienced impression with borg? Yes. > 2. What real-world use-cases is an append-only mode repository with > prunes (no plums involved haha) actually being used, if at all? Without additional, external tooling (locking out untrusted clients for maintenance, running checks on archives, comparing to what should be there etc.) it is not a useful feature if things ought to be deleted at some point. > 3. Is the documentation missing a really obvious point with append-mode > repositories that is clear to everyone having expert borg knowledge > but hasn't occurred to those with novice borg knowledge? Understanding append only mode requires knowledge of at least all the internal docs. So, yeah. > 4. Was the implementation of an append-only mode feature a knee-jerk > reaction to "fix" something without addressing the real core > problem/risk underlying the feature requested (i.e.: mitigating the > risk destructive operations has to a repository from borg clients on > untrusted/semi-trusted client machines)? Correct. > > Sincerely, and with great respect. 
> > Christian > Cheers, Marian From cl_111 at hotmail.com Sat Sep 29 05:07:30 2018 From: cl_111 at hotmail.com (C L) Date: Sat, 29 Sep 2018 09:07:30 +0000 Subject: [Borgbackup] questions about append-only mode repository In-Reply-To: <9f8bb300-e563-0f06-5ec9-a69047bfadc7@enkore.de> References: , <9f8bb300-e563-0f06-5ec9-a69047bfadc7@enkore.de> Message-ID: Thanks Marian for the quick response and confirming for me that something indeed was off about this feature. I'm glad in the end it wasn't this nut behind the wheel just not getting "it" haha Might I recommend that this section in the documentation for version 1.1.x be marked as "experimental", similar to "borg recreate". If no one has created such an issue for it then I'll be happy to raise. Christian ________________________________ From: Marian Beermann Sent: Saturday, 29 September 2018 4:30 PM To: C L; borgbackup at python.org Subject: Re: [Borgbackup] questions about append-only mode repository Hi Christian, On 9/29/18 9:37 AM, C L wrote: > Hi Folks! > > I've been trialing borgbackup 1.1.x for a short time now and found it to > be ticking all the boxes so far. > > However I'm trying to wrap my head around use-cases for append-only mode > when it applies to multiple client machines accessing a central remote > repository and whether this functionality is currently feature complete > or should even be used in such scenarios. append-only mode is a misfeature that was implemented because everyone assumed the "proper solution" was only a few months away (a common theme). It exposes a low-level implementation detail and requires a lot of low-level knowledge about borg to use "correctly". > Based on what I've read in the documentation, a repository can be made > ?append-only?, which means that Borg will never overwrite or delete > committed data. However, the documentation continues with an example of > a compromised client machine that has remotely deleted backups from the > repository. 
> > On the surface of it, I would expect an append-only repository to deny > any remote "borg delete" or "borg prune" commands to occur from any borg > client. > > Instead from the documentation a "soft-delete" is permitted on the > repository and the transaction logged. Such "soft-deleted" transactions > are (silently?) processed only when the repository is accessed in a > non-append-only mode with an appropriate "borg {delete,prune,create}" > command, typically executed from a more trusted machine than the client > machines. > > For an non-compromised client machine running a scheduled backup job > which applies it's own "borg prune" rules onto archives prefixed by it's > hostname seems like overkill considering the administrator would have to > run "borg prune" from a more trusted machine and apply it across all the > archives in the repository, irrespective of prefix. > > A potential race condition exists between a compromised, but undetected, > client machine that has "soft-deleted" archives from the repository and > the trusted machine that next "borg prunes" the repository. Correct. > There is obviously a sliding scale with: > > * the level of trust/risk that any client machine has to the > repository; and > * on the amount of work an administrator must perform to maintain > backup sets and yet provide some flexibility with global/per-client > machine retention policy; and > * to detect and react to compromised client machines which have access > to the repository. > > To the borg assimilated community, I have the current questions: > > 1. As currently implemented, are append-only mode repositories just > more work to maintain with little reward, or is that just my > initial, inexperienced impression with borg? Yes. > 2. What real-world use-cases is an append-only mode repository with > prunes (no plums involved haha) actually being used, if at all? 
Without additional, external tooling (locking out untrusted clients for maintenance, running checks on archives, comparing to what should be there etc.) it is not a useful feature if things ought to be deleted at some point. > 3. Is the documentation missing a really obvious point with append-mode > repositories that is clear to everyone having expert borg knowledge > but hasn't occurred to those with novice borg knowledge? Understanding append only mode requires knowledge of at least all the internal docs. So, yeah. > 4. Was the implementation of an append-only mode feature a knee-jerk > reaction to "fix" something without addressing the real core > problem/risk underlying the feature requested (i.e.: mitigating the > risk destructive operations has to a repository from borg clients on > untrusted/semi-trusted client machines)? Correct. > > Sincerely, and with great respect. > > Christian > Cheers, Marian -------------- next part -------------- An HTML attachment was scrubbed... URL: