From ndbecker2 at gmail.com Sat Jan 7 08:00:15 2017 From: ndbecker2 at gmail.com (Neal Becker) Date: Sat, 7 Jan 2017 08:00:15 -0500 Subject: [Borgbackup] safe to interrupt backup? Message-ID: Is it safe to siginterrupt a backup? -- *Those who don't understand recursion are doomed to repeat it* -------------- next part -------------- An HTML attachment was scrubbed... URL: From public at enkore.de Sat Jan 7 08:21:48 2017 From: public at enkore.de (Marian Beermann) Date: Sat, 7 Jan 2017 14:21:48 +0100 Subject: [Borgbackup] safe to interrupt backup? In-Reply-To: References: Message-ID: <1d219a14-f918-004f-e2eb-496bf492875f@enkore.de> On 07.01.2017 14:00, Neal Becker wrote: > Is it safe to siginterrupt a backup? Yes From roland at micite.net Sat Jan 7 10:57:01 2017 From: roland at micite.net (Roland van Laar) Date: Sat, 7 Jan 2017 16:57:01 +0100 Subject: [Borgbackup] safe to interrupt backup? In-Reply-To: <1d219a14-f918-004f-e2eb-496bf492875f@enkore.de> References: <1d219a14-f918-004f-e2eb-496bf492875f@enkore.de> Message-ID: <341e217a-c987-7b5c-efad-f3939c8ec9a1@micite.net> On 07-01-17 14:21, Marian Beermann wrote: > On 07.01.2017 14:00, Neal Becker wrote: >> Is it safe to siginterrupt a backup? > Yes Try it. Borg make checkpoints at regular intervals. Next time you run the backup it will continue at the latest checkpoint. - Roland > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From elladan at eskimo.com Thu Jan 12 02:34:05 2017 From: elladan at eskimo.com (Elladan) Date: Wed, 11 Jan 2017 23:34:05 -0800 Subject: [Borgbackup] safe to interrupt backup? In-Reply-To: <341e217a-c987-7b5c-efad-f3939c8ec9a1@micite.net> References: <1d219a14-f918-004f-e2eb-496bf492875f@enkore.de> <341e217a-c987-7b5c-efad-f3939c8ec9a1@micite.net> Message-ID: On Sat, Jan 7, 2017 at 7:57 AM, Roland van Laar via Borgbackup < borgbackup at python.org> wrote: > On 07-01-17 14:21, Marian Beermann wrote: > >> On 07.01.2017 14:00, Neal Becker wrote: >> >>> Is it safe to siginterrupt a backup? >>> >> Yes >> > Try it. Borg make checkpoints at regular intervals. > Next time you run the backup it will continue at the latest checkpoint. It leaks the checkpoints if your new backup has a different name than the interrupted one, which will generally always be the case since they're usually timestamped. You need to go delete the checkpoints by hand once a backup finishes successfully. I'd say this is a bug. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Thu Jan 12 08:43:33 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 12 Jan 2017 14:43:33 +0100 Subject: [Borgbackup] safe to interrupt backup? In-Reply-To: References: <1d219a14-f918-004f-e2eb-496bf492875f@enkore.de> <341e217a-c987-7b5c-efad-f3939c8ec9a1@micite.net> Message-ID: <8feeaa37-9974-808e-6072-84d730f29e83@waldmann-edv.de> > Is it safe to siginterrupt a backup? > Yes > Try it. Borg make checkpoints at regular intervals. > Next time you run the backup it will continue at the latest checkpoint. Note that the last phrase is not totally true. It *feels* like it is continuing from where it was interrupted, but what it in fact does is to create an independent complete new backup archive. It does the same thing as when no backup has been interrupted before. 
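
As a minimal sketch of what that looks like in practice (repository path and archive names below are placeholders, and a checkpoint archive only appears if the interrupted run lasted past at least one checkpoint interval):

  borg create /path/to/repo::pc-2017-01-07 /home
  ^C                                    # SIGINT partway through the run
  borg list /path/to/repo               # shows pc-2017-01-07.checkpoint
  borg create /path/to/repo::pc-2017-01-08 /home   # a complete, independent new archive
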
Why it feels (speed-wise) like it was continuing is because of this: - the files cache has remembered mtime/size/inode and chunkids of all files until the last checkpoint, so the full backup will very quickly skip all the files that have not been modified since (I've seen up to 10000/s). - if a file has been modified, it will not store the chunks again it already has in the repo > It leaks the checkpoints if your new backup has a different name than > the interrupted one, which will generally always be the case since > they're usually timestamped. > > You need to go delete the checkpoints by hand once a backup finishes > successfully. I'd say this is a bug. No, this is not a bug, it is working as intended. checkpoint archives are just not touched at all because it just makes a new backup archive (see above). there is no such thing as "continuation of a checkpoint", thus there is no checkpoint removal at borg create time. A while ago, I added some special treatment for checkpoint archives to borg prune though: it will only keep a checkpoint if it is latest and has not been superceded by a completed backup archive. This assumes that --prefix is used reasonably to select archives with one specific data set. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From wtraylor at areyouthinking.org Thu Jan 12 16:18:50 2017 From: wtraylor at areyouthinking.org (Walker Traylor) Date: Fri, 13 Jan 2017 04:18:50 +0700 Subject: [Borgbackup] Running borg tests in vagrant with recent commits Message-ID: Is there a process for pulling recent code and rerunning tests manually within the vagrant instances, without having to destroy and reup the vagrant boxes? Thanks, Walker Traylor walker at walkertraylor.com m: +1.703.389.4507 skype: wtraylor linkedin.com/in/walkertraylor -------------- next part -------------- An HTML attachment was scrubbed... URL: From elladan at eskimo.com Sun Jan 15 01:19:07 2017 From: elladan at eskimo.com (Elladan) Date: Sat, 14 Jan 2017 22:19:07 -0800 Subject: [Borgbackup] safe to interrupt backup? In-Reply-To: <8feeaa37-9974-808e-6072-84d730f29e83@waldmann-edv.de> References: <1d219a14-f918-004f-e2eb-496bf492875f@enkore.de> <341e217a-c987-7b5c-efad-f3939c8ec9a1@micite.net> <8feeaa37-9974-808e-6072-84d730f29e83@waldmann-edv.de> Message-ID: On Thu, Jan 12, 2017 at 5:43 AM, Thomas Waldmann wrote: > > > It leaks the checkpoints if your new backup has a different name than > > the interrupted one, which will generally always be the case since > > they're usually timestamped. > > > > You need to go delete the checkpoints by hand once a backup finishes > > successfully. I'd say this is a bug. > > No, this is not a bug, it is working as intended. > > checkpoint archives are just not touched at all because it just makes a > new backup archive (see above). there is no such thing as "continuation > of a checkpoint", thus there is no checkpoint removal at borg create time. > > A while ago, I added some special treatment for checkpoint archives to > borg prune though: it will only keep a checkpoint if it is latest and > has not been superceded by a completed backup archive. This assumes that > --prefix is used reasonably to select archives with one specific data set. I see the code you added on the head, but it's not there in 1.0.9: the current release completely ignores checkpoint archives when running prune. So: it currently leaks checkpoints when used in the normal way, requiring manual cleanup. This happens to me a lot, so I'll enjoy the new prune code. 
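
A minimal sketch of that manual cleanup, run once a later backup has completed successfully (repository path and archive name are placeholders; checkpoint archives carry the default .checkpoint suffix):

  borg list /path/to/repo | grep '\.checkpoint'
  borg delete /path/to/repo::pc-2017-01-07.checkpoint
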
:-) Thanks, Justin From tw at waldmann-edv.de Sun Jan 15 14:11:51 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 15 Jan 2017 20:11:51 +0100 Subject: [Borgbackup] borgbackup beta 1.1.0b3 released Message-ID: <3400d9f8-bad7-620c-6591-43aa065aeba7@waldmann-edv.de> https://github.com/borgbackup/borg/releases/tag/1.1.0b3 More details: see URL above. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Mon Jan 16 11:48:16 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 16 Jan 2017 17:48:16 +0100 Subject: [Borgbackup] Running borg tests in vagrant with recent commits In-Reply-To: References: Message-ID: <2fc7044a-2843-cd2e-ae1a-b3afaeb6b761@waldmann-edv.de> On 01/12/2017 10:18 PM, Walker Traylor wrote: > Is there a process for pulling recent code and rerunning tests manually > within the vagrant instances, without having to destroy and reup the > vagrant boxes? There is no automation for that yet. But you can ofc use vagrant rsync, vagrant ssh and do some commands manually that are usually done via the Vagrantfile scripts. If you like, you could script that also and put it into scripts/ and make a PR. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From debacle at debian.org Sat Jan 21 16:58:47 2017 From: debacle at debian.org (W. Martin Borgert) Date: Sat, 21 Jan 2017 22:58:47 +0100 Subject: [Borgbackup] Resuming restore? Message-ID: <20170121215847.jqb2sux56e7rarmo@fama> Hi, I had a very long restore (huge data, but low bandwidth) and the connection failed at some point. I just restarted, and had the impression, that borg started all over. At the end, the restore succeeded, but I'm curious whether borg is able to resume a restore operation in case of connection failure. TIA! From public at enkore.de Sat Jan 21 17:12:32 2017 From: public at enkore.de (Marian Beermann) Date: Sat, 21 Jan 2017 23:12:32 +0100 Subject: [Borgbackup] Resuming restore? In-Reply-To: <20170121215847.jqb2sux56e7rarmo@fama> References: <20170121215847.jqb2sux56e7rarmo@fama> Message-ID: <1d2aa2dc-06f9-a5fd-d965-fc085473f15f@enkore.de> Hi, this isn't supported out of the box. A not entirely working patch exists: https://github.com/borgbackup/borg/pull/1665 There are basically two-to-three options: 1. Look at the extraction result and gauge which folders are complete and which not. Sometimes this might be obvious, and borg-list will help here, since stuff is extracted in the same order, so if borg list says: foo1/... foo2/... foo3/... And you see: foo1/... foo2/... Then you know for certain that foo1 was completely extracted. foo2/ probably not. This can be used to exclude the finished paths when resuming the operation (extract takes --exclude just like create). 2. Your spidey sense tingles and tells you that the conection is flakey. In this case, use --list (or in 1.1 --progress/-p) to have it show where it is now. When it then aborts due to connection issues, you directly see where it left off and what you can exclude when resuming 3. Avoid having the connection break visibly to borg, by fiddling with SSH timeouts and all that. Cheers, Marian PS: It would of course be nicer if, especially for read-only operations, Borg would be able to just re-attempt the connection without involving the user. On 21.01.2017 22:58, W. Martin Borgert wrote: > Hi, > > I had a very long restore (huge data, but low bandwidth) and the > connection failed at some point. 
I just restarted, and had the > impression, that borg started all over. At the end, the restore > succeeded, but I'm curious whether borg is able to resume a > restore operation in case of connection failure. > > TIA! > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From debacle at debian.org Sat Jan 21 17:34:07 2017 From: debacle at debian.org (W. Martin Borgert) Date: Sat, 21 Jan 2017 23:34:07 +0100 Subject: [Borgbackup] Resuming restore? In-Reply-To: <1d2aa2dc-06f9-a5fd-d965-fc085473f15f@enkore.de> References: <20170121215847.jqb2sux56e7rarmo@fama> <1d2aa2dc-06f9-a5fd-d965-fc085473f15f@enkore.de> Message-ID: <20170121223407.ynkcyojpvcdzux6v@fama> On 2017-01-21 23:12, Marian Beermann wrote: > this isn't supported out of the box. A not entirely working patch > exists: https://github.com/borgbackup/borg/pull/1665 Nice. > Look at the extraction result and gauge which folders are complete and > which not. Sometimes this might be obvious, and borg-list will help > here, since stuff is extracted in the same order, so if borg list says: That is exactly what I did, and it worked fine :~) > PS: It would of course be nicer if, especially for read-only operations, > Borg would be able to just re-attempt the connection without involving > the user. Yes, but resume (or --continue) would even work, if the client had to reboot in between. Both would be useful, I think. Many thanks for the useful information, Marian! From debacle at debian.org Sat Jan 21 17:41:57 2017 From: debacle at debian.org (W. Martin Borgert) Date: Sat, 21 Jan 2017 23:41:57 +0100 Subject: [Borgbackup] Backup without remote borg installed? Message-ID: <20170121224156.mqfa2p5jtp2bk7m6@fama> Hi, I'm trying to backup to a server that does not have borg installed. My local borg tells me: $ borg init ssh://myuser at myserver:myport/home/myuser/myrepo Remote: bash: borg: command not found Connection closed by remote host. Is borg working on the server? Is this mode of operation supported? Am I doing something wrong? TIA! From public at enkore.de Sat Jan 21 17:45:48 2017 From: public at enkore.de (Marian Beermann) Date: Sat, 21 Jan 2017 23:45:48 +0100 Subject: [Borgbackup] Backup without remote borg installed? In-Reply-To: <20170121224156.mqfa2p5jtp2bk7m6@fama> References: <20170121224156.mqfa2p5jtp2bk7m6@fama> Message-ID: <6fdf742b-d6c1-946c-0cd1-38f8251cd0bd@enkore.de> Hi, in these cases where it is not possible to install Borg at the repository location, one can use a network FS like NFS, sshfs or even SMB/CIFS. Some people regularly uses this and it works, but it's typically slower and uses more bandwidth. Cheers, Marian On 21.01.2017 23:41, W. Martin Borgert wrote: > Hi, > > I'm trying to backup to a server that does not have borg > installed. My local borg tells me: > > $ borg init ssh://myuser at myserver:myport/home/myuser/myrepo > Remote: bash: borg: command not found > Connection closed by remote host. Is borg working on the server? > > Is this mode of operation supported? > Am I doing something wrong? > > TIA! > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From debacle at debian.org Sat Jan 21 18:13:21 2017 From: debacle at debian.org (W. Martin Borgert) Date: Sun, 22 Jan 2017 00:13:21 +0100 Subject: [Borgbackup] Backup without remote borg installed? 
In-Reply-To: <6fdf742b-d6c1-946c-0cd1-38f8251cd0bd@enkore.de> References: <20170121224156.mqfa2p5jtp2bk7m6@fama> <6fdf742b-d6c1-946c-0cd1-38f8251cd0bd@enkore.de> Message-ID: <20170121231321.53b6qpc7qf2hf4yx@fama> On 2017-01-21 23:45, Marian Beermann wrote: > in these cases where it is not possible to install Borg at the > repository location, one can use a network FS like NFS, sshfs or even > SMB/CIFS. Some people regularly uses this and it works, but it's > typically slower and uses more bandwidth. Thanks, again! With sshfs, it works fine. As expected, the initial backup is slower than rsync, but significantly faster than obnam. I did not control bandwidth, however. Cheers From devzero at web.de Mon Jan 23 06:39:47 2017 From: devzero at web.de (devzero at web.de) Date: Mon, 23 Jan 2017 12:39:47 +0100 Subject: [Borgbackup] purge not deleting data? Message-ID: Hello, i run a rsync based daily backup where a number of hosts files are being staged to some local filesystem (one subdir per host) and then put into individual borg-repo on a "one-dir-per-host" basis (see script below) every few days, i see that borg prune does not appear to purge data from the repos, and for my curiosity it happens for all repos at the same day/backup-run - though rsync tells it did delete files from the rsync copy. so i wonder under which circumstances borg prune skips deleting files !? as the repos were created at different points in time i cannot explain this to me, maybe someone has a clue how this can be explained or analyzed? regards roland [root at backupvm2]# egrep -Hi "This archive|Deleted" borg_*-*-2017*.err borg_04-01-2017_03-45.err:This archive: 60.42 GB 45.51 GB 536.55 MB borg_04-01-2017_03-45.err:Deleted data: -99.61 GB -82.57 GB -1.32 GB borg_05-01-2017_04-11.err:This archive: 60.39 GB 45.49 GB 1.17 GB borg_05-01-2017_04-11.err:Deleted data: -60.54 GB -45.61 GB -494.64 MB borg_06-01-2017_03-32.err:This archive: 60.37 GB 45.46 GB 1.12 GB borg_06-01-2017_03-32.err:Deleted data: -60.55 GB -45.62 GB -985.93 MB borg_07-01-2017_05-00.err:This archive: 60.40 GB 45.58 GB 1.32 GB borg_07-01-2017_05-00.err:Deleted data: -60.55 GB -45.63 GB -499.40 MB borg_08-01-2017_03-10.err:This archive: 60.40 GB 45.58 GB 536.57 MB borg_08-01-2017_03-10.err:Deleted data: 0 B 0 B 0 B borg_09-01-2017_03-24.err:This archive: 60.39 GB 45.58 GB 589.20 MB borg_09-01-2017_03-24.err:Deleted data: -60.54 GB -45.63 GB -473.20 MB borg_10-01-2017_03-34.err:This archive: 60.39 GB 45.59 GB 1.11 GB borg_10-01-2017_03-34.err:Deleted data: -60.54 GB -45.63 GB -511.70 MB borg_11-01-2017_03-51.err:This archive: 60.46 GB 45.63 GB 910.10 MB borg_11-01-2017_03-51.err:Deleted data: -60.54 GB -45.63 GB -508.64 MB borg_12-01-2017_03-51.err:This archive: 60.46 GB 45.64 GB 1.33 GB borg_12-01-2017_03-51.err:Deleted data: -60.54 GB -45.63 GB -507.77 MB borg_13-01-2017_04-50.err:This archive: 60.55 GB 45.71 GB 1.50 GB borg_13-01-2017_04-50.err:Deleted data: -60.55 GB -45.63 GB -513.74 MB borg_14-01-2017_04-32.err:This archive: 60.56 GB 45.71 GB 926.04 MB borg_14-01-2017_04-32.err:Deleted data: 0 B 0 B 0 B borg_15-01-2017_03-14.err:This archive: 60.54 GB 45.71 GB 565.45 MB borg_15-01-2017_03-14.err:Deleted data: 0 B 0 B 0 B borg_16-01-2017_03-49.err:This archive: 60.54 GB 45.71 GB 529.96 MB borg_16-01-2017_03-49.err:Deleted data: -60.46 GB -45.56 GB -512.22 MB borg_17-01-2017_04-14.err:This archive: 60.56 GB 45.72 GB 1.57 GB borg_17-01-2017_04-14.err:Deleted data: -60.44 GB -45.53 GB -358.81 MB ls -1 */rsync.log |while read line;do echo $line $(grep -i 
deleting $line|wc -l);done 2017-01-03-2303/rsync.log 599 2017-01-04-2303/rsync.log 622 2017-01-05-2303/rsync.log 2525 2017-01-06-2303/rsync.log 711 2017-01-07-2303/rsync.log 6305 2017-01-08-2303/rsync.log 581 2017-01-09-2303/rsync.log 488 2017-01-10-2303/rsync.log 1605 2017-01-11-2303/rsync.log 738 2017-01-12-2303/rsync.log 1669 2017-01-13-2303/rsync.log 5394 2017-01-14-2303/rsync.log 5056 2017-01-15-2303/rsync.log 478 2017-01-16-2303/rsync.log 487 [root at backupvm2 backup]# cat borg_backup_all.sh #!/bin/bash export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes export BORG_RELOCATED_REPO_ACCESS_IS_OK=yes export BORG_CACHE_DIR=/backup/borg-cache export DATUM=$(date +"%d-%m-%Y_%H-%M") export ARCHIVENAME=archive-$DATUM export BORGBIN="/backup/bin/borg" export BASEPATH="/iscsi/lun1/borg-repos" export BASEPATHLOG="/iscsi/lun2/borg-logs" echo "hostname::archive-name orig-size compr-size dedup-size" for HOSTNAME in $(ls -1r /btrfspool/backup) do if [ ! -f /btrfspool/backup/$HOSTNAME/disabled ] then export REPOPATH="$BASEPATH/$HOSTNAME" export ARCHIVEPATH="$REPOPATH::$ARCHIVENAME" export LOG=$BASEPATHLOG/$HOSTNAME/borg_$DATUM.log export ERR=$BASEPATHLOG/$HOSTNAME/borg_$DATUM.err if [ ! -d $REPOPATH ]; then mkdir $REPOPATH mkdir $BASEPATHLOG/$HOSTNAME $BORGBIN init --encryption=none $REPOPATH fi cd /btrfspool/backup/$HOSTNAME/backup $BORGBIN create --filter=AME --info --list --stats --numeric-owner --compression lz4 $ARCHIVEPATH . >$LOG 2>$ERR echo $HOSTNAME::$ARCHIVENAME $($BORGBIN info $REPOPATH::$ARCHIVENAME |egrep "This archive"|cut -d ":" -f 2-)| /usr/bin/awk '{printf "%-60s %10s %2s %10s %2s %10s %2s\n",$1,$2,$3,$4,$5,$6,$7}' $BORGBIN prune --verbose --stats --keep-daily 14 --keep-weekly 8 --keep-monthly 6 $REPOPATH >>$LOG 2>>$ERR fi done From tw at waldmann-edv.de Mon Jan 23 08:44:28 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 23 Jan 2017 14:44:28 +0100 Subject: [Borgbackup] purge not deleting data? In-Reply-To: References: Message-ID: Hi, > every few days, i see that borg prune does not appear to purge data from the repos, > and for my curiosity it happens for all repos at the same day/backup-run Well, guess you do not create a borg archive every day? E.g. because your cronjob does not run sundays or so? Or, due to other circumstances (e.g. machine powered off) it did not happen to make a backup every day? It could also be just a not-so-obvious effect of implementing your prune policy, like the different parts (daily, weekly, monthly, ...) interfering. > - though rsync tells it did delete files from the rsync copy. That is completely unrelated, it just reduces total size and count of files in your backup data source. > so i wonder under which circumstances borg prune skips deleting files !? borg prune deletes ("thins out") backup archives following the policy you give to it. It just decides which backup archives to keep / not to keep, so the policy you gave is implemented. If you give -v --list, it will spill out the result of that decision. As a consequence of archives getting deleted (if your repo is not append-only) it will also usually reduce your overall repo size IF it decided to delete at least 1 old archive. > as the repos were created at different points in time i cannot explain this to me, maybe someone has a clue how this can be explained or analyzed? The effects of non-trivial purge policies (maybe combined with not making a backup every day) are often a bit hard to explain. 
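
To see that decision for one of the per-host repositories above without deleting anything, a dry run with the options copied from the posted script should be enough (HOSTNAME is a placeholder):

  borg prune --dry-run --list --keep-daily 14 --keep-weekly 8 --keep-monthly 6 /iscsi/lun1/borg-repos/HOSTNAME
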
A while ago I tried to explain prune policies better and made a graphical example, see there: https://github.com/borgbackup/borg/blob/1.0-maint/docs/misc/prune-example.txt BTW, I've seen you use d-m-y for dates, that is not the best way to do it as it does not sort correctly. Maybe rather use y-m-d, if possible. Cheers, Thomas From devzero at web.de Mon Jan 23 09:43:10 2017 From: devzero at web.de (devzero at web.de) Date: Mon, 23 Jan 2017 15:43:10 +0100 Subject: [Borgbackup] purge not deleting data? In-Reply-To: References: , Message-ID: thanks for your answer/hints > Well, guess you do not create a borg archive every day? i do. cronjob runs every day, also on weekend. first it does rsync to local, then borgbackup is run. rsync does delete data every day, there are servers with significant change every day (gb`s of added and deleted files) i create a repository for every single server. that makes deduplication less efficient, yes, but deduplication benefit on the "multiple servers into one borg repo) does not bring that great benefit like having each server in a dedicated borg repo. dedup on per server basis is sufficient and more safe. what i want is compression and rotation/prune (i.e. incremental forever) and dedup for the data, which is very different on every server anyway... we have the same backup strategy from another location , but with rsync + zfs, but i did not want to rely on zfsonlinux as sole strategy for backup, so i created a second solution working similar, but with borgbackup as the backend instead of zfs + rotating snapshots > A while ago I tried to explain prune policies better and made a > graphical example, see there: > > https://github.com/borgbackup/borg/blob/1.0-maint/docs/misc/prune-example.txt > > BTW, I've seen you use d-m-y for dates, that is not the best way to do > it as it does not sort correctly. Maybe rather use y-m-d, if possible. i will see what i can find and thanks again for the hints. regards roland > Gesendet: Montag, 23. Januar 2017 um 14:44 Uhr > Von: "Thomas Waldmann" > An: borgbackup at python.org > Betreff: Re: [Borgbackup] purge not deleting data? > > Hi, > > > every few days, i see that borg prune does not appear to purge data from the repos, > > and for my curiosity it happens for all repos at the same day/backup-run > > Well, guess you do not create a borg archive every day? > > E.g. because your cronjob does not run sundays or so? > > Or, due to other circumstances (e.g. machine powered off) it did not > happen to make a backup every day? > > It could also be just a not-so-obvious effect of implementing your prune > policy, like the different parts (daily, weekly, monthly, ...) interfering. > > > - though rsync tells it did delete files from the rsync copy. > > That is completely unrelated, it just reduces total size and count of > files in your backup data source. > > > so i wonder under which circumstances borg prune skips deleting files !? > > borg prune deletes ("thins out") backup archives following the policy > you give to it. It just decides which backup archives to keep / not to > keep, so the policy you gave is implemented. If you give -v --list, it > will spill out the result of that decision. > > As a consequence of archives getting deleted (if your repo is not > append-only) it will also usually reduce your overall repo size IF it > decided to delete at least 1 old archive. > > > as the repos were created at different points in time i cannot explain this to me, maybe someone has a clue how this can be explained or analyzed? 
> > The effects of non-trivial purge policies (maybe combined with not > making a backup every day) are often a bit hard to explain. > > A while ago I tried to explain prune policies better and made a > graphical example, see there: > > https://github.com/borgbackup/borg/blob/1.0-maint/docs/misc/prune-example.txt > > BTW, I've seen you use d-m-y for dates, that is not the best way to do > it as it does not sort correctly. Maybe rather use y-m-d, if possible. > > Cheers, Thomas > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From adrian.klaver at aklaver.com Mon Jan 23 09:59:35 2017 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Mon, 23 Jan 2017 06:59:35 -0800 Subject: [Borgbackup] purge not deleting data? In-Reply-To: References: Message-ID: <57449b4b-6579-4742-5dcf-18c81e36b209@aklaver.com> On 01/23/2017 03:39 AM, devzero at web.de wrote: > Hello, > > i run a rsync based daily backup where a number of hosts files are being staged to some local filesystem (one subdir per host) and then put into individual borg-repo on a "one-dir-per-host" basis (see script below) > > every few days, i see that borg prune does not appear to purge data from the repos, and for my curiosity it happens for all repos at the same day/backup-run - though rsync tells it did delete files from the rsync copy. > > so i wonder under which circumstances borg prune skips deleting files !? > > as the repos were created at different points in time i cannot explain this to me, maybe someone has a clue how this can be explained or analyzed? Assuming you are referring to the lines below where Deleted data = 0, then the archives on 8/1/2017 and 15/1/2017 represent the last one of each week and would be saved by the --keep-weekly 8. Not sure about the 14/1/2017 one as that should fall within the --keep-daily 14, unless there was another later archive on that day. How did you determine the Deleted data for each archive? Also I to back Thomas's suggestion, I would use y-m-d dates. You do use that in the rsync output and it would make comparing things easier. 
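
A possible one-line change to the posted borg_backup_all.sh along those lines, assuming GNU date:

  export DATUM=$(date +"%Y-%m-%d_%H-%M")

Archive names built from it (archive-2017-01-23_03-45, ...) then sort chronologically in plain borg list output.
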
Re: the suggestion to use --list, it gives you the day of the week which helps me with seeing what pruning is doing: ----------------------------------------------------------------------------- Original size Compressed size Deduplicated size This archive: 1.33 GB 1.31 GB 89.14 MB All archives: 25.90 GB 24.36 GB 4.15 GB Unique chunks Total chunks Chunk index: 37785 516017 ------------------------------------------------------------------------------ Keeping archive: tito-012217_1900 Sun, 2017-01-22 19:00:03 Keeping archive: tito-012117_1900 Sat, 2017-01-21 19:00:03 Keeping archive: tito-012017_1900 Fri, 2017-01-20 19:00:03 Keeping archive: tito-011917_1900 Thu, 2017-01-19 19:00:03 Keeping archive: tito-011817_1900 Wed, 2017-01-18 19:00:04 Keeping archive: tito-011717_1900 Tue, 2017-01-17 19:00:04 Keeping archive: tito-011617_1900 Mon, 2017-01-16 19:00:04 Keeping archive: tito-011517_1900 Sun, 2017-01-15 19:00:04 Keeping archive: tito-010817_1900 Sun, 2017-01-08 19:00:04 Keeping archive: tito-010117_1900 Sun, 2017-01-01 19:00:05 Keeping archive: tito-123116_1900 Sat, 2016-12-31 19:00:05 Keeping archive: tito-122516_1900 Sun, 2016-12-25 19:00:04 Keeping archive: tito-121816_1900 Sun, 2016-12-18 19:00:03 Pruning archive: tito-011417_1900 Sat, 2017-01-14 19:00:04 > > regards > roland > > > [root at backupvm2]# egrep -Hi "This archive|Deleted" borg_*-*-2017*.err > borg_04-01-2017_03-45.err:This archive: 60.42 GB 45.51 GB 536.55 MB > borg_04-01-2017_03-45.err:Deleted data: -99.61 GB -82.57 GB -1.32 GB > borg_05-01-2017_04-11.err:This archive: 60.39 GB 45.49 GB 1.17 GB > borg_05-01-2017_04-11.err:Deleted data: -60.54 GB -45.61 GB -494.64 MB > borg_06-01-2017_03-32.err:This archive: 60.37 GB 45.46 GB 1.12 GB > borg_06-01-2017_03-32.err:Deleted data: -60.55 GB -45.62 GB -985.93 MB > borg_07-01-2017_05-00.err:This archive: 60.40 GB 45.58 GB 1.32 GB > borg_07-01-2017_05-00.err:Deleted data: -60.55 GB -45.63 GB -499.40 MB > borg_08-01-2017_03-10.err:This archive: 60.40 GB 45.58 GB 536.57 MB > borg_08-01-2017_03-10.err:Deleted data: 0 B 0 B 0 B > borg_09-01-2017_03-24.err:This archive: 60.39 GB 45.58 GB 589.20 MB > borg_09-01-2017_03-24.err:Deleted data: -60.54 GB -45.63 GB -473.20 MB > borg_10-01-2017_03-34.err:This archive: 60.39 GB 45.59 GB 1.11 GB > borg_10-01-2017_03-34.err:Deleted data: -60.54 GB -45.63 GB -511.70 MB > borg_11-01-2017_03-51.err:This archive: 60.46 GB 45.63 GB 910.10 MB > borg_11-01-2017_03-51.err:Deleted data: -60.54 GB -45.63 GB -508.64 MB > borg_12-01-2017_03-51.err:This archive: 60.46 GB 45.64 GB 1.33 GB > borg_12-01-2017_03-51.err:Deleted data: -60.54 GB -45.63 GB -507.77 MB > borg_13-01-2017_04-50.err:This archive: 60.55 GB 45.71 GB 1.50 GB > borg_13-01-2017_04-50.err:Deleted data: -60.55 GB -45.63 GB -513.74 MB > borg_14-01-2017_04-32.err:This archive: 60.56 GB 45.71 GB 926.04 MB > borg_14-01-2017_04-32.err:Deleted data: 0 B 0 B 0 B > borg_15-01-2017_03-14.err:This archive: 60.54 GB 45.71 GB 565.45 MB > borg_15-01-2017_03-14.err:Deleted data: 0 B 0 B 0 B > borg_16-01-2017_03-49.err:This archive: 60.54 GB 45.71 GB 529.96 MB > borg_16-01-2017_03-49.err:Deleted data: -60.46 GB -45.56 GB -512.22 MB > borg_17-01-2017_04-14.err:This archive: 60.56 GB 45.72 GB 1.57 GB > borg_17-01-2017_04-14.err:Deleted data: -60.44 GB -45.53 GB -358.81 MB > > ls -1 */rsync.log |while read line;do echo $line $(grep -i deleting $line|wc -l);done > 2017-01-03-2303/rsync.log 599 > 2017-01-04-2303/rsync.log 622 > 2017-01-05-2303/rsync.log 2525 > 2017-01-06-2303/rsync.log 711 > 
2017-01-07-2303/rsync.log 6305 > 2017-01-08-2303/rsync.log 581 > 2017-01-09-2303/rsync.log 488 > 2017-01-10-2303/rsync.log 1605 > 2017-01-11-2303/rsync.log 738 > 2017-01-12-2303/rsync.log 1669 > 2017-01-13-2303/rsync.log 5394 > 2017-01-14-2303/rsync.log 5056 > 2017-01-15-2303/rsync.log 478 > 2017-01-16-2303/rsync.log 487 > > > [root at backupvm2 backup]# cat borg_backup_all.sh > #!/bin/bash > > export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes > export BORG_RELOCATED_REPO_ACCESS_IS_OK=yes > > export BORG_CACHE_DIR=/backup/borg-cache > export DATUM=$(date +"%d-%m-%Y_%H-%M") > export ARCHIVENAME=archive-$DATUM > export BORGBIN="/backup/bin/borg" > export BASEPATH="/iscsi/lun1/borg-repos" > export BASEPATHLOG="/iscsi/lun2/borg-logs" > echo "hostname::archive-name orig-size compr-size dedup-size" > > for HOSTNAME in $(ls -1r /btrfspool/backup) > do > if [ ! -f /btrfspool/backup/$HOSTNAME/disabled ] > then > export REPOPATH="$BASEPATH/$HOSTNAME" > export ARCHIVEPATH="$REPOPATH::$ARCHIVENAME" > export LOG=$BASEPATHLOG/$HOSTNAME/borg_$DATUM.log > export ERR=$BASEPATHLOG/$HOSTNAME/borg_$DATUM.err > > if [ ! -d $REPOPATH ]; then > mkdir $REPOPATH > mkdir $BASEPATHLOG/$HOSTNAME > $BORGBIN init --encryption=none $REPOPATH > fi > > cd /btrfspool/backup/$HOSTNAME/backup > $BORGBIN create --filter=AME --info --list --stats --numeric-owner --compression lz4 $ARCHIVEPATH . >$LOG 2>$ERR > echo $HOSTNAME::$ARCHIVENAME $($BORGBIN info $REPOPATH::$ARCHIVENAME |egrep "This archive"|cut -d ":" -f 2-)| /usr/bin/awk '{printf "%-60s %10s %2s %10s %2s %10s %2s\n",$1,$2,$3,$4,$5,$6,$7}' > $BORGBIN prune --verbose --stats --keep-daily 14 --keep-weekly 8 --keep-monthly 6 $REPOPATH >>$LOG 2>>$ERR > fi > done > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From devzero at web.de Mon Jan 23 11:29:38 2017 From: devzero at web.de (devzero at web.de) Date: Mon, 23 Jan 2017 17:29:38 +0100 Subject: [Borgbackup] purge not deleting data? In-Reply-To: <57449b4b-6579-4742-5dcf-18c81e36b209@aklaver.com> References: , <57449b4b-6579-4742-5dcf-18c81e36b209@aklaver.com> Message-ID: very helpful. thank you! > Gesendet: Montag, 23. Januar 2017 um 15:59 Uhr > Von: "Adrian Klaver" > An: devzero at web.de, borgbackup at python.org > Betreff: Re: [Borgbackup] purge not deleting data? > > On 01/23/2017 03:39 AM, devzero at web.de wrote: > > Hello, > > > > i run a rsync based daily backup where a number of hosts files are being staged to some local filesystem (one subdir per host) and then put into individual borg-repo on a "one-dir-per-host" basis (see script below) > > > > every few days, i see that borg prune does not appear to purge data from the repos, and for my curiosity it happens for all repos at the same day/backup-run - though rsync tells it did delete files from the rsync copy. > > > > so i wonder under which circumstances borg prune skips deleting files !? > > > > as the repos were created at different points in time i cannot explain this to me, maybe someone has a clue how this can be explained or analyzed? > > Assuming you are referring to the lines below where Deleted data = 0, then the archives on > 8/1/2017 and 15/1/2017 represent the last one of each week and would be saved > by the --keep-weekly 8. Not sure about the 14/1/2017 one as that should fall > within the --keep-daily 14, unless there was another later archive on that day. 
> > How did you determine the Deleted data for each archive? > > > Also I to back Thomas's suggestion, I would use y-m-d dates. You do use that in the > rsync output and it would make comparing things easier. > > Re: the suggestion to use --list, it gives you the day of the week which helps me with > seeing what pruning is doing: > > ----------------------------------------------------------------------------- > Original size Compressed size Deduplicated size > This archive: 1.33 GB 1.31 GB 89.14 MB > All archives: 25.90 GB 24.36 GB 4.15 GB > > Unique chunks Total chunks > Chunk index: 37785 516017 > ------------------------------------------------------------------------------ > Keeping archive: tito-012217_1900 Sun, 2017-01-22 19:00:03 > Keeping archive: tito-012117_1900 Sat, 2017-01-21 19:00:03 > Keeping archive: tito-012017_1900 Fri, 2017-01-20 19:00:03 > Keeping archive: tito-011917_1900 Thu, 2017-01-19 19:00:03 > Keeping archive: tito-011817_1900 Wed, 2017-01-18 19:00:04 > Keeping archive: tito-011717_1900 Tue, 2017-01-17 19:00:04 > Keeping archive: tito-011617_1900 Mon, 2017-01-16 19:00:04 > Keeping archive: tito-011517_1900 Sun, 2017-01-15 19:00:04 > Keeping archive: tito-010817_1900 Sun, 2017-01-08 19:00:04 > Keeping archive: tito-010117_1900 Sun, 2017-01-01 19:00:05 > Keeping archive: tito-123116_1900 Sat, 2016-12-31 19:00:05 > Keeping archive: tito-122516_1900 Sun, 2016-12-25 19:00:04 > Keeping archive: tito-121816_1900 Sun, 2016-12-18 19:00:03 > Pruning archive: tito-011417_1900 Sat, 2017-01-14 19:00:04 > > > > > regards > > roland > > > > > > [root at backupvm2]# egrep -Hi "This archive|Deleted" borg_*-*-2017*.err > > borg_04-01-2017_03-45.err:This archive: 60.42 GB 45.51 GB 536.55 MB > > borg_04-01-2017_03-45.err:Deleted data: -99.61 GB -82.57 GB -1.32 GB > > borg_05-01-2017_04-11.err:This archive: 60.39 GB 45.49 GB 1.17 GB > > borg_05-01-2017_04-11.err:Deleted data: -60.54 GB -45.61 GB -494.64 MB > > borg_06-01-2017_03-32.err:This archive: 60.37 GB 45.46 GB 1.12 GB > > borg_06-01-2017_03-32.err:Deleted data: -60.55 GB -45.62 GB -985.93 MB > > borg_07-01-2017_05-00.err:This archive: 60.40 GB 45.58 GB 1.32 GB > > borg_07-01-2017_05-00.err:Deleted data: -60.55 GB -45.63 GB -499.40 MB > > borg_08-01-2017_03-10.err:This archive: 60.40 GB 45.58 GB 536.57 MB > > borg_08-01-2017_03-10.err:Deleted data: 0 B 0 B 0 B > > borg_09-01-2017_03-24.err:This archive: 60.39 GB 45.58 GB 589.20 MB > > borg_09-01-2017_03-24.err:Deleted data: -60.54 GB -45.63 GB -473.20 MB > > borg_10-01-2017_03-34.err:This archive: 60.39 GB 45.59 GB 1.11 GB > > borg_10-01-2017_03-34.err:Deleted data: -60.54 GB -45.63 GB -511.70 MB > > borg_11-01-2017_03-51.err:This archive: 60.46 GB 45.63 GB 910.10 MB > > borg_11-01-2017_03-51.err:Deleted data: -60.54 GB -45.63 GB -508.64 MB > > borg_12-01-2017_03-51.err:This archive: 60.46 GB 45.64 GB 1.33 GB > > borg_12-01-2017_03-51.err:Deleted data: -60.54 GB -45.63 GB -507.77 MB > > borg_13-01-2017_04-50.err:This archive: 60.55 GB 45.71 GB 1.50 GB > > borg_13-01-2017_04-50.err:Deleted data: -60.55 GB -45.63 GB -513.74 MB > > borg_14-01-2017_04-32.err:This archive: 60.56 GB 45.71 GB 926.04 MB > > borg_14-01-2017_04-32.err:Deleted data: 0 B 0 B 0 B > > borg_15-01-2017_03-14.err:This archive: 60.54 GB 45.71 GB 565.45 MB > > borg_15-01-2017_03-14.err:Deleted data: 0 B 0 B 0 B > > borg_16-01-2017_03-49.err:This archive: 60.54 GB 45.71 GB 529.96 MB > > borg_16-01-2017_03-49.err:Deleted data: -60.46 GB -45.56 GB -512.22 MB > > borg_17-01-2017_04-14.err:This archive: 60.56 GB 45.72 
GB 1.57 GB > > borg_17-01-2017_04-14.err:Deleted data: -60.44 GB -45.53 GB -358.81 MB > > > > ls -1 */rsync.log |while read line;do echo $line $(grep -i deleting $line|wc -l);done > > 2017-01-03-2303/rsync.log 599 > > 2017-01-04-2303/rsync.log 622 > > 2017-01-05-2303/rsync.log 2525 > > 2017-01-06-2303/rsync.log 711 > > 2017-01-07-2303/rsync.log 6305 > > 2017-01-08-2303/rsync.log 581 > > 2017-01-09-2303/rsync.log 488 > > 2017-01-10-2303/rsync.log 1605 > > 2017-01-11-2303/rsync.log 738 > > 2017-01-12-2303/rsync.log 1669 > > 2017-01-13-2303/rsync.log 5394 > > 2017-01-14-2303/rsync.log 5056 > > 2017-01-15-2303/rsync.log 478 > > 2017-01-16-2303/rsync.log 487 > > > > > > [root at backupvm2 backup]# cat borg_backup_all.sh > > #!/bin/bash > > > > export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes > > export BORG_RELOCATED_REPO_ACCESS_IS_OK=yes > > > > export BORG_CACHE_DIR=/backup/borg-cache > > export DATUM=$(date +"%d-%m-%Y_%H-%M") > > export ARCHIVENAME=archive-$DATUM > > export BORGBIN="/backup/bin/borg" > > export BASEPATH="/iscsi/lun1/borg-repos" > > export BASEPATHLOG="/iscsi/lun2/borg-logs" > > echo "hostname::archive-name orig-size compr-size dedup-size" > > > > for HOSTNAME in $(ls -1r /btrfspool/backup) > > do > > if [ ! -f /btrfspool/backup/$HOSTNAME/disabled ] > > then > > export REPOPATH="$BASEPATH/$HOSTNAME" > > export ARCHIVEPATH="$REPOPATH::$ARCHIVENAME" > > export LOG=$BASEPATHLOG/$HOSTNAME/borg_$DATUM.log > > export ERR=$BASEPATHLOG/$HOSTNAME/borg_$DATUM.err > > > > if [ ! -d $REPOPATH ]; then > > mkdir $REPOPATH > > mkdir $BASEPATHLOG/$HOSTNAME > > $BORGBIN init --encryption=none $REPOPATH > > fi > > > > cd /btrfspool/backup/$HOSTNAME/backup > > $BORGBIN create --filter=AME --info --list --stats --numeric-owner --compression lz4 $ARCHIVEPATH . >$LOG 2>$ERR > > echo $HOSTNAME::$ARCHIVENAME $($BORGBIN info $REPOPATH::$ARCHIVENAME |egrep "This archive"|cut -d ":" -f 2-)| /usr/bin/awk '{printf "%-60s %10s %2s %10s %2s %10s %2s\n",$1,$2,$3,$4,$5,$6,$7}' > > $BORGBIN prune --verbose --stats --keep-daily 14 --keep-weekly 8 --keep-monthly 6 $REPOPATH >>$LOG 2>>$ERR > > fi > > done > > _______________________________________________ > > Borgbackup mailing list > > Borgbackup at python.org > > https://mail.python.org/mailman/listinfo/borgbackup > > > > > -- > Adrian Klaver > adrian.klaver at aklaver.com > From fabio.pedretti at unibs.it Wed Jan 25 12:09:37 2017 From: fabio.pedretti at unibs.it (Fabio Pedretti) Date: Wed, 25 Jan 2017 18:09:37 +0100 Subject: [Borgbackup] purge not deleting data? In-Reply-To: References: Message-ID: Hi, I suggest using prune before create, so that prune always start at the same time and is not impacted by the time the create requires. This way you'll get consistent archive deletion. Another issue that may alter timing of prune is this issue when older archivers cross a daylight saving time change: https://github.com/borgbackup/borg/issues/1980 2017-01-23 12:39 GMT+01:00 : > Hello, > > i run a rsync based daily backup where a number of hosts files are being > staged to some local filesystem (one subdir per host) and then put into > individual borg-repo on a "one-dir-per-host" basis (see script below) > > every few days, i see that borg prune does not appear to purge data from > the repos, and for my curiosity it happens for all repos at the same > day/backup-run - though rsync tells it did delete files from the rsync copy. > > so i wonder under which circumstances borg prune skips deleting files !? 
> > as the repos were created at different points in time i cannot explain > this to me, maybe someone has a clue how this can be explained or analyzed? > > regards > roland > > > [root at backupvm2]# egrep -Hi "This archive|Deleted" borg_*-*-2017*.err > borg_04-01-2017_03-45.err:This archive: 60.42 GB > 45.51 GB 536.55 MB > borg_04-01-2017_03-45.err:Deleted data: -99.61 GB > -82.57 GB -1.32 GB > borg_05-01-2017_04-11.err:This archive: 60.39 GB > 45.49 GB 1.17 GB > borg_05-01-2017_04-11.err:Deleted data: -60.54 GB > -45.61 GB -494.64 MB > borg_06-01-2017_03-32.err:This archive: 60.37 GB > 45.46 GB 1.12 GB > borg_06-01-2017_03-32.err:Deleted data: -60.55 GB > -45.62 GB -985.93 MB > borg_07-01-2017_05-00.err:This archive: 60.40 GB > 45.58 GB 1.32 GB > borg_07-01-2017_05-00.err:Deleted data: -60.55 GB > -45.63 GB -499.40 MB > borg_08-01-2017_03-10.err:This archive: 60.40 GB > 45.58 GB 536.57 MB > borg_08-01-2017_03-10.err:Deleted data: 0 B > 0 B 0 B > borg_09-01-2017_03-24.err:This archive: 60.39 GB > 45.58 GB 589.20 MB > borg_09-01-2017_03-24.err:Deleted data: -60.54 GB > -45.63 GB -473.20 MB > borg_10-01-2017_03-34.err:This archive: 60.39 GB > 45.59 GB 1.11 GB > borg_10-01-2017_03-34.err:Deleted data: -60.54 GB > -45.63 GB -511.70 MB > borg_11-01-2017_03-51.err:This archive: 60.46 GB > 45.63 GB 910.10 MB > borg_11-01-2017_03-51.err:Deleted data: -60.54 GB > -45.63 GB -508.64 MB > borg_12-01-2017_03-51.err:This archive: 60.46 GB > 45.64 GB 1.33 GB > borg_12-01-2017_03-51.err:Deleted data: -60.54 GB > -45.63 GB -507.77 MB > borg_13-01-2017_04-50.err:This archive: 60.55 GB > 45.71 GB 1.50 GB > borg_13-01-2017_04-50.err:Deleted data: -60.55 GB > -45.63 GB -513.74 MB > borg_14-01-2017_04-32.err:This archive: 60.56 GB > 45.71 GB 926.04 MB > borg_14-01-2017_04-32.err:Deleted data: 0 B > 0 B 0 B > borg_15-01-2017_03-14.err:This archive: 60.54 GB > 45.71 GB 565.45 MB > borg_15-01-2017_03-14.err:Deleted data: 0 B > 0 B 0 B > borg_16-01-2017_03-49.err:This archive: 60.54 GB > 45.71 GB 529.96 MB > borg_16-01-2017_03-49.err:Deleted data: -60.46 GB > -45.56 GB -512.22 MB > borg_17-01-2017_04-14.err:This archive: 60.56 GB > 45.72 GB 1.57 GB > borg_17-01-2017_04-14.err:Deleted data: -60.44 GB > -45.53 GB -358.81 MB > > ls -1 */rsync.log |while read line;do echo $line $(grep -i deleting > $line|wc -l);done > 2017-01-03-2303/rsync.log 599 > 2017-01-04-2303/rsync.log 622 > 2017-01-05-2303/rsync.log 2525 > 2017-01-06-2303/rsync.log 711 > 2017-01-07-2303/rsync.log 6305 > 2017-01-08-2303/rsync.log 581 > 2017-01-09-2303/rsync.log 488 > 2017-01-10-2303/rsync.log 1605 > 2017-01-11-2303/rsync.log 738 > 2017-01-12-2303/rsync.log 1669 > 2017-01-13-2303/rsync.log 5394 > 2017-01-14-2303/rsync.log 5056 > 2017-01-15-2303/rsync.log 478 > 2017-01-16-2303/rsync.log 487 > > > [root at backupvm2 backup]# cat borg_backup_all.sh > #!/bin/bash > > export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes > export BORG_RELOCATED_REPO_ACCESS_IS_OK=yes > > export BORG_CACHE_DIR=/backup/borg-cache > export DATUM=$(date +"%d-%m-%Y_%H-%M") > export ARCHIVENAME=archive-$DATUM > export BORGBIN="/backup/bin/borg" > export BASEPATH="/iscsi/lun1/borg-repos" > export BASEPATHLOG="/iscsi/lun2/borg-logs" > echo "hostname::archive-name > orig-size compr-size dedup-size" > > for HOSTNAME in $(ls -1r /btrfspool/backup) > do > if [ ! 
-f /btrfspool/backup/$HOSTNAME/disabled ] > then > export REPOPATH="$BASEPATH/$HOSTNAME" > export ARCHIVEPATH="$REPOPATH::$ARCHIVENAME" > export LOG=$BASEPATHLOG/$HOSTNAME/borg_$DATUM.log > export ERR=$BASEPATHLOG/$HOSTNAME/borg_$DATUM.err > > if [ ! -d $REPOPATH ]; then > mkdir $REPOPATH > mkdir $BASEPATHLOG/$HOSTNAME > $BORGBIN init --encryption=none $REPOPATH > fi > > cd /btrfspool/backup/$HOSTNAME/backup > $BORGBIN create --filter=AME --info --list --stats --numeric-owner > --compression lz4 $ARCHIVEPATH . >$LOG 2>$ERR > echo $HOSTNAME::$ARCHIVENAME $($BORGBIN info $REPOPATH::$ARCHIVENAME > |egrep "This archive"|cut -d ":" -f 2-)| /usr/bin/awk '{printf "%-60s %10s > %2s %10s %2s %10s %2s\n",$1,$2,$3,$4,$5,$6,$7}' > $BORGBIN prune --verbose --stats --keep-daily 14 --keep-weekly 8 > --keep-monthly 6 $REPOPATH >>$LOG 2>>$ERR > fi > done > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- ing. Pedretti Fabio Responsabile U.O.C. "Reti e Sistemi" http://www.unibs.it/organizzazione/amministrazione-centrale/servizio-servizi-ict/uoc-reti-e-sistemi Universit? degli Studi di Brescia Via Valotti, 9 - 25121 Brescia E-mail: fabio.pedretti at unibs.it -- Informativa sulla Privacy: http://www.unibs.it/node/8155 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Sat Jan 28 21:49:16 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 29 Jan 2017 03:49:16 +0100 Subject: [Borgbackup] borgbackup release candidate 1.0.10rc1 released Message-ID: https://github.com/borgbackup/borg/releases/tag/1.0.10rc1 Please help testing! More details: see URL above. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From billy at worldofbilly.com Sun Jan 29 16:43:37 2017 From: billy at worldofbilly.com (Billy Charlton) Date: Sun, 29 Jan 2017 13:43:37 -0800 Subject: [Borgbackup] borgbackup release candidate 1.0.10rc1 released In-Reply-To: References: Message-ID: I created an unofficial Windows installer for the 1.0.10rc1, if anyone wants to test. * https://github.com/billyc/borg-releases * GPG key fingerprint: EC2B 7E69 BDA3 F260 8396 3E41 40ED 1F77 9784 BBF0 On Sat, Jan 28, 2017 at 6:49 PM, Thomas Waldmann wrote: > https://github.com/borgbackup/borg/releases/tag/1.0.10rc1 > > Please help testing! > > More details: see URL above. > > Cheers, > > Thomas > > -- > > GPG ID: 9F88FB52FAF7B393 > GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -------------- next part -------------- An HTML attachment was scrubbed... URL: From devzero at web.de Mon Jan 30 10:41:42 2017 From: devzero at web.de (devzero at web.de) Date: Mon, 30 Jan 2017 16:41:42 +0100 Subject: [Borgbackup] borgbackup release candidate 1.0.10rc1 released In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... 
URL: From maurice.libes at osupytheas.fr Mon Feb 6 13:38:40 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Mon, 6 Feb 2017 19:38:40 +0100 Subject: [Borgbackup] understanding prune work Message-ID: <7c7dc2d2-4531-e6a9-7351-13e2784be2c4@osupytheas.fr> hi to all new to borgbackup since a few weeks, I make some test on my own data and PC before to set in production I make a backup by day, here below but when I say to only keep 7 days, I dont understand why borg want to prune the backup of pcml-2017-01-28 and keeping older one as pcml-2017-01-15 pcml-2017-01-22 ? thanks for explanation, maybe I have set some trouble during multiples tests ML $ borg prune --list --info --dry-run --keep-daily=7 --keep-weekly=3 borg at borgserver.myuniv.fr:/mnt/provigo-borg/sauve-pcml --prefix "pcml" Keeping archive: pcml-2017-02-06 Mon, 2017-02-06 02:00:26 Keeping archive: pcml-2017-02-05 Sun, 2017-02-05 02:00:37 Keeping archive: pcml-2017-02-04 Sat, 2017-02-04 02:00:07 Keeping archive: pcml-2017-02-03 Fri, 2017-02-03 11:49:57 Keeping archive: pcml-2017-01-31 Tue, 2017-01-31 02:00:09 Keeping archive: pcml-2017-01-30 Mon, 2017-01-30 02:00:07 Keeping archive: pcml-2017-01-29 Sun, 2017-01-29 02:00:15 Keeping archive: pcml-2017-01-22 Sun, 2017-01-22 02:00:13 Keeping archive: pcml-2017-01-15 Sun, 2017-01-15 02:00:09 Would prune: pcml-2017-01-28 Sat, 2017-01-28 02:00:07 borg list borg at 139.124.2.149:/mnt/provigo-borg/sauve-pcml pcml-2017-01-15 Sun, 2017-01-15 02:00:09 pcml-2017-01-22 Sun, 2017-01-22 02:00:13 pcml-2017-01-28 Sat, 2017-01-28 02:00:07 pcml-2017-01-29 Sun, 2017-01-29 02:00:15 pcml-2017-01-30 Mon, 2017-01-30 02:00:07 pcml-2017-01-31 Tue, 2017-01-31 02:00:09 pcml-2017-02-03 Fri, 2017-02-03 11:49:57 pcml-2017-02-04 Sat, 2017-02-04 02:00:07 pcml-2017-02-05 Sun, 2017-02-05 02:00:37 pcml-2017-02-06 Mon, 2017-02-06 02:00:26 -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 From adrian.klaver at aklaver.com Mon Feb 6 16:15:51 2017 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Mon, 6 Feb 2017 13:15:51 -0800 Subject: [Borgbackup] understanding prune work In-Reply-To: <7c7dc2d2-4531-e6a9-7351-13e2784be2c4@osupytheas.fr> References: <7c7dc2d2-4531-e6a9-7351-13e2784be2c4@osupytheas.fr> Message-ID: On 02/06/2017 10:38 AM, Maurice Libes wrote: > hi to all > new to borgbackup since a few weeks, > I make some test on my own data and PC before to set in production > > I make a backup by day, here below > but when I say to only keep 7 days, I dont understand why borg want to > prune the backup of pcml-2017-01-28 and keeping older one as > pcml-2017-01-15 pcml-2017-01-22 ? Because you have --keep-weekly=3. This will keep the last backup of each week going back 3 weeks. pcml-2017-01-15 and pcml-2017-01-22 are on Sundays which is the last day of an ISO week, so borgbackup is doing what you want. 
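
One way to check that directly, assuming GNU date (%G, %V and %u print the ISO year, ISO week and ISO weekday):

  date -d 2017-01-15 +%G-W%V-%u   # 2017-W02-7 -> Sunday, last archive of its week
  date -d 2017-01-22 +%G-W%V-%u   # 2017-W03-7 -> Sunday, last archive of its week
  date -d 2017-01-28 +%G-W%V-%u   # 2017-W04-6 -> Saturday, its week ends with the kept 01-29 archive
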
> > thanks for explanation, maybe I have set some trouble during multiples > tests > > ML > > $ borg prune --list --info --dry-run --keep-daily=7 --keep-weekly=3 > borg at borgserver.myuniv.fr:/mnt/provigo-borg/sauve-pcml --prefix "pcml" > > Keeping archive: pcml-2017-02-06 Mon, 2017-02-06 > 02:00:26 > > Keeping archive: pcml-2017-02-05 Sun, 2017-02-05 > 02:00:37 > > Keeping archive: pcml-2017-02-04 Sat, 2017-02-04 > 02:00:07 > > Keeping archive: pcml-2017-02-03 Fri, 2017-02-03 > 11:49:57 > > Keeping archive: pcml-2017-01-31 Tue, 2017-01-31 > 02:00:09 > > Keeping archive: pcml-2017-01-30 Mon, 2017-01-30 > 02:00:07 > > Keeping archive: pcml-2017-01-29 Sun, 2017-01-29 > 02:00:15 > > Keeping archive: pcml-2017-01-22 Sun, 2017-01-22 > 02:00:13 > > Keeping archive: pcml-2017-01-15 Sun, 2017-01-15 > 02:00:09 > > Would prune: pcml-2017-01-28 Sat, 2017-01-28 > 02:00:07 > > borg list borg at 139.124.2.149:/mnt/provigo-borg/sauve-pcml > > pcml-2017-01-15 Sun, 2017-01-15 02:00:09 > > pcml-2017-01-22 Sun, 2017-01-22 02:00:13 > > pcml-2017-01-28 Sat, 2017-01-28 02:00:07 > > pcml-2017-01-29 Sun, 2017-01-29 02:00:15 > > pcml-2017-01-30 Mon, 2017-01-30 02:00:07 > > pcml-2017-01-31 Tue, 2017-01-31 02:00:09 > > pcml-2017-02-03 Fri, 2017-02-03 11:49:57 > > pcml-2017-02-04 Sat, 2017-02-04 02:00:07 > > pcml-2017-02-05 Sun, 2017-02-05 02:00:37 > > pcml-2017-02-06 Mon, 2017-02-06 02:00:26 > > > -- Adrian Klaver adrian.klaver at aklaver.com From maurice.libes at osupytheas.fr Tue Feb 7 03:31:41 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Tue, 7 Feb 2017 09:31:41 +0100 Subject: [Borgbackup] understanding prune work In-Reply-To: References: <7c7dc2d2-4531-e6a9-7351-13e2784be2c4@osupytheas.fr> Message-ID: <03799637-85d1-bef5-146b-0274daae4221@osupytheas.fr> Le 06/02/2017 ? 22:15, Adrian Klaver a ?crit : > On 02/06/2017 10:38 AM, Maurice Libes wrote: >> hi to all >> new to borgbackup since a few weeks, >> I make some test on my own data and PC before to set in production >> >> I make a backup by day, here below >> but when I say to only keep 7 days, I dont understand why borg want to >> prune the backup of pcml-2017-01-28 and keeping older one as >> pcml-2017-01-15 pcml-2017-01-22 ? > > Because you have --keep-weekly=3. This will keep the last backup of > each week going back 3 weeks. pcml-2017-01-15 and pcml-2017-01-22 are > on Sundays which is the last day of an ISO week, so borgbackup is > doing what you want. 
ok understood I didn't see the things like that ML > >> >> thanks for explanation, maybe I have set some trouble during multiples >> tests >> >> ML >> >> $ borg prune --list --info --dry-run --keep-daily=7 --keep-weekly=3 >> borg at borgserver.myuniv.fr:/mnt/provigo-borg/sauve-pcml --prefix "pcml" >> >> Keeping archive: pcml-2017-02-06 Mon, 2017-02-06 >> 02:00:26 >> >> Keeping archive: pcml-2017-02-05 Sun, 2017-02-05 >> 02:00:37 >> >> Keeping archive: pcml-2017-02-04 Sat, 2017-02-04 >> 02:00:07 >> >> Keeping archive: pcml-2017-02-03 Fri, 2017-02-03 >> 11:49:57 >> >> Keeping archive: pcml-2017-01-31 Tue, 2017-01-31 >> 02:00:09 >> >> Keeping archive: pcml-2017-01-30 Mon, 2017-01-30 >> 02:00:07 >> >> Keeping archive: pcml-2017-01-29 Sun, 2017-01-29 >> 02:00:15 >> >> Keeping archive: pcml-2017-01-22 Sun, 2017-01-22 >> 02:00:13 >> >> Keeping archive: pcml-2017-01-15 Sun, 2017-01-15 >> 02:00:09 >> >> Would prune: pcml-2017-01-28 Sat, 2017-01-28 >> 02:00:07 >> >> borg list borg at 139.124.2.149:/mnt/provigo-borg/sauve-pcml >> >> pcml-2017-01-15 Sun, 2017-01-15 02:00:09 >> >> pcml-2017-01-22 Sun, 2017-01-22 02:00:13 >> >> pcml-2017-01-28 Sat, 2017-01-28 02:00:07 >> >> pcml-2017-01-29 Sun, 2017-01-29 02:00:15 >> >> pcml-2017-01-30 Mon, 2017-01-30 02:00:07 >> >> pcml-2017-01-31 Tue, 2017-01-31 02:00:09 >> >> pcml-2017-02-03 Fri, 2017-02-03 11:49:57 >> >> pcml-2017-02-04 Sat, 2017-02-04 02:00:07 >> >> pcml-2017-02-05 Sun, 2017-02-05 02:00:37 >> >> pcml-2017-02-06 Mon, 2017-02-06 02:00:26 >> >> >> > > -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 From adrian.klaver at aklaver.com Tue Feb 7 10:49:25 2017 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Tue, 7 Feb 2017 07:49:25 -0800 Subject: [Borgbackup] understanding prune work In-Reply-To: <03799637-85d1-bef5-146b-0274daae4221@osupytheas.fr> References: <7c7dc2d2-4531-e6a9-7351-13e2784be2c4@osupytheas.fr> <03799637-85d1-bef5-146b-0274daae4221@osupytheas.fr> Message-ID: <939301be-b255-65ad-13da-eef9e12ed5b6@aklaver.com> On 02/07/2017 12:31 AM, Maurice Libes wrote: > > > Le 06/02/2017 ? 22:15, Adrian Klaver a ?crit : >> On 02/06/2017 10:38 AM, Maurice Libes wrote: >>> hi to all >>> new to borgbackup since a few weeks, >>> I make some test on my own data and PC before to set in production >>> >>> I make a backup by day, here below >>> but when I say to only keep 7 days, I dont understand why borg want to >>> prune the backup of pcml-2017-01-28 and keeping older one as >>> pcml-2017-01-15 pcml-2017-01-22 ? >> >> Because you have --keep-weekly=3. This will keep the last backup of >> each week going back 3 weeks. pcml-2017-01-15 and pcml-2017-01-22 are >> on Sundays which is the last day of an ISO week, so borgbackup is >> doing what you want. > ok understood > I didn't see the things like that How did you see it? FYI, an explanation can be found here: http://borgbackup.readthedocs.io/en/stable/usage.html#borg-prune > > ML >> -- Adrian Klaver adrian.klaver at aklaver.com From eric at in3x.io Thu Feb 9 11:04:45 2017 From: eric at in3x.io (Eric S. Johansson) Date: Thu, 9 Feb 2017 11:04:45 -0500 Subject: [Borgbackup] Duplicating repository and making it independent from the parent Message-ID: <76c2faaf-e5ad-9179-2a7d-1b4ca6f338bc@in3x.io> I hope I didn't miss an FAQ on this topic. I've been running Borg internally for a while and like many other people, think it's great. 
Now it's time to move data off-site and I tried just replicating the in-house repository with rsync. Once I replicated the original data, I started updating both repositories independently. . Apparently this is not a good process because I got a warning about problems with the cache on the local server. What's the best way to replicate an existing backup and make it independent from the parent source? The main reason I'm doing this is that I want to capture all of the data changes currently preserved in the local Borg repository. -- Eric S. Johansson eric at in3x.io http://www.in3x.io 978-512-0272 From adrian.klaver at aklaver.com Thu Feb 9 11:57:59 2017 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Thu, 9 Feb 2017 08:57:59 -0800 Subject: [Borgbackup] Duplicating repository and making it independent from the parent In-Reply-To: <76c2faaf-e5ad-9179-2a7d-1b4ca6f338bc@in3x.io> References: <76c2faaf-e5ad-9179-2a7d-1b4ca6f338bc@in3x.io> Message-ID: <01ee18c6-b919-d394-f53d-95edb451e77b@aklaver.com> On 02/09/2017 08:04 AM, Eric S. Johansson wrote: > I hope I didn't miss an FAQ on this topic. > > I've been running Borg internally for a while and like many other > people, think it's great. Now it's time to move data off-site and I > tried just replicating the in-house repository with rsync. Once I > replicated the original data, I started updating both repositories > independently. . Apparently this is not a good process because I got a > warning about problems with the cache on the local server. http://borgbackup.readthedocs.io/en/stable/faq.html#can-i-backup-from-multiple-servers-into-a-single-repository http://borgbackup.readthedocs.io/en/stable/faq.html#can-i-copy-or-synchronize-my-repo-to-another-location > > What's the best way to replicate an existing backup and make it > independent from the parent source? The main reason I'm doing this is > that I want to capture all of the data changes currently preserved in > the local Borg repository. > > -- Adrian Klaver adrian.klaver at aklaver.com From eric at in3x.io Thu Feb 9 12:54:47 2017 From: eric at in3x.io (Eric S. Johansson) Date: Thu, 9 Feb 2017 12:54:47 -0500 Subject: [Borgbackup] Duplicating repository and making it independent from the parent In-Reply-To: <01ee18c6-b919-d394-f53d-95edb451e77b@aklaver.com> References: <76c2faaf-e5ad-9179-2a7d-1b4ca6f338bc@in3x.io> <01ee18c6-b919-d394-f53d-95edb451e77b@aklaver.com> Message-ID: On 2/9/2017 11:57 AM, Adrian Klaver wrote: > On 02/09/2017 08:04 AM, Eric S. Johansson wrote: >> I hope I didn't miss an FAQ on this topic. >> >> I've been running Borg internally for a while and like many other >> people, think it's great. Now it's time to move data off-site and I >> tried just replicating the in-house repository with rsync. Once I >> replicated the original data, I started updating both repositories >> independently. . Apparently this is not a good process because I got a >> warning about problems with the cache on the local server. > > http://borgbackup.readthedocs.io/en/stable/faq.html#can-i-backup-from-multiple-servers-into-a-single-repository > > > http://borgbackup.readthedocs.io/en/stable/faq.html#can-i-copy-or-synchronize-my-repo-to-another-location > > I was afraid I didn't explain it properly. Let me try again: For a few weeks I backed up local data into local repository. Using rsync, I replicated the repository to a remote machine. Now I have two repositories local repository and remote repository. 
client machine --borg create--> repo1
repo1 --copy--> repo2

I ran Borg to update the remote repository and I was told that the remote
repository originally belonged to the local address, and I was given the
option of reassigning it to the remote address. When I updated the local
repository, I was told that it belonged to the remote address and given the
option of reassigning it to the local address. When I did that, I was told
the cache was newer, indicating something bad happened.

client machine --borg create--> repo2
# Got warning here about repository belonging to a local location

client machine --borg create--> repo1
# Got warning here about repository belonging to remote (repo2) location
# and was told that the cache was newer than the repository.

I'm redoing the replication to make sure I didn't mess things up. I'll
let you know whether I was able to reproduce the problem or not.

From heiko.helmle at horiba.com  Fri Feb 10 08:17:09 2017
From: heiko.helmle at horiba.com (heiko.helmle at horiba.com)
Date: Fri, 10 Feb 2017 14:17:09 +0100
Subject: [Borgbackup] borg cutting off last letter of host for remote repos
Message-ID: 

Hello List,

this is a strange issue - I get this only on one system (yet). (SLES 12 SP2)

I'm using borg 1.0.9, the "all in one" binary from the github page, and
it seems to cut off the last letter from the hostname...

# borg init ssh://testname
Remote: ssh: Could not resolve hostname testnam: Name or service not known

Only borg seems to do this (ssh straight to the machine works from the
same prompt).

Adding a space to the end fixes the name (well, obviously testname
doesn't exist, but it's looking up the right name now):

# borg init "ssh://testname "
Remote: ssh: Could not resolve hostname testname: Name or service not known

Any clues where I should start to dig?

Best Regards
Heiko
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tw at waldmann-edv.de  Fri Feb 10 08:49:59 2017
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 10 Feb 2017 14:49:59 +0100
Subject: [Borgbackup] borg cutting off last letter of host for remote repos
In-Reply-To: 
References: 
Message-ID: <1708ef14-841b-30f4-65ea-6229ce44163b@waldmann-edv.de>

> this is a strange issue - I get this only on one system (yet).
(SLES 12 > > SP2) > How is that system different from your other systems (on which you use > the same binary)? this is the first SLES12SP2 System I tried. The others were CentOS, Ubuntu and Debian. > > I'm using borg 1.0.9, the "all in one" binary from the github page and > > it seems to cut off the last letter from the hostname... > Give the sha256sum of the binary you use, please. # sha256sum /usr/local/bin/borg-linux64-1.0.9 9fa23310aa8b08a5d7427970250d9ec0f9512b03bbac32659079df8ae1031764 /usr/local/bin/borg-linux64-1.0.9 there's a symlink to this binary in /usr/local/bin/borg > That might be a pyinstaller bootloader issue. > Can you try 1.0.10rc1 ? tried and that one's better: # borg-linux64-1.0.10rc1 init ssh://testmachine usage: borg-linux64-1.0.10rc1 init [-h] [--critical] [--error] [--warning] [--info] [--debug] [--lock-wait N] [--show-rc] [--no-files-cache] [--umask M] [--remote-path PATH] [-e {none,keyfile,repokey}] [-a] [REPOSITORY] borg-linux64-1.0.10rc1 init: error: argument REPOSITORY: Invalid location format: "ssh://testmachine" > Try the normal linux64 binary as well as the pyi321-debug one. The py321-debug one does it right too. > > adding a space to the end fixes the name: (well obviously testname > > doesn't exist, but it's looking up the right name now) > > Sounds like an off-by-one error. Just strange this did not surface before. Yes - I'm also confused - I used borg on quite a lot of systems and this is the first that's having this problem. Maybe some environment variable influencing this that's only set on this system? I didn't find a clue yet... Best Regards Heiko Helmle -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Fri Feb 10 09:22:21 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 10 Feb 2017 15:22:21 +0100 Subject: [Borgbackup] borg cutting off last letter of host for remote repos In-Reply-To: References: <1708ef14-841b-30f4-65ea-6229ce44163b@waldmann-edv.de> Message-ID: <73b1a563-a7ae-c2d3-824d-327588a2e34a@waldmann-edv.de> Filed as: https://github.com/borgbackup/borg/issues/2140 If you have a github account, you can also add more infos there. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From adrian.klaver at aklaver.com Fri Feb 10 09:22:44 2017 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Fri, 10 Feb 2017 06:22:44 -0800 Subject: [Borgbackup] borg cutting off last letter of host for remote repos In-Reply-To: References: <1708ef14-841b-30f4-65ea-6229ce44163b@waldmann-edv.de> Message-ID: <3999d20c-7a7c-b76c-dffe-e52d0fc5a315@aklaver.com> On 02/10/2017 06:08 AM, heiko.helmle at horiba.com wrote: > "Borgbackup" > wrote on 10.02.2017 14:49:59: > >> > this is a strange issue - i get this only on one system (yet). (SLES 12 >> > SP2) >> How is that system different from your other systems (on which you use >> the same binary)? > > this is the first SLES12SP2 System I tried. The others were CentOS, > Ubuntu and Debian. Hmm, might be a SUSE thing. I just tried on my openSUSE Leap 42.2: aklaver at tito:~> borg init ssh://arkansas Remote: ssh: Could not resolve hostname arkansa: Name or service not known Cheating works: aklaver at tito:~> borg init --remote-path /home/aklaver/bin/borg ssh://arkansass Enter new passphrase: Or specifying IP: aklaver at tito:~> borg_new init ssh://aklaver at xxx.xxx.xxx.xx The authenticity of host 'xxx.xxx.xxx.x (xxx.xxx.xxx.x)' can't be established. 
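
For anyone else hitting this on 1.0.9 before upgrading: the truncation only
seems to bite when the ssh:// form is given with a bare hostname and no
repository path, so spelling things out is a reasonable workaround. A small
sketch (user, host and paths are hypothetical):

# scp-style syntax, path relative to the login user's home on the server:
$ borg init backupuser@backuphost:repos/myrepo

# ssh:// syntax with an explicit absolute path (and optionally a port):
$ borg init ssh://backupuser@backuphost/srv/borg/myrepo
$ borg init ssh://backupuser@backuphost:2222/srv/borg/myrepo

Upgrading to 1.0.10, which fixed the location parsing regex, is of course
the real fix.
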
> >> > I'm using borg 1.0.9, the "all in one" binary from the github page and >> > it seems to cut off the last letter from the hostname... >> Give the sha256sum of the binary you use, please. > > # sha256sum /usr/local/bin/borg-linux64-1.0.9 > 9fa23310aa8b08a5d7427970250d9ec0f9512b03bbac32659079df8ae1031764 > /usr/local/bin/borg-linux64-1.0.9 > > there's a symlink to this binary in /usr/local/bin/borg > >> That might be a pyinstaller bootloader issue. >> Can you try 1.0.10rc1 ? > > tried and that one's better: > # borg-linux64-1.0.10rc1 init ssh://testmachine > usage: borg-linux64-1.0.10rc1 init [-h] [--critical] [--error] [--warning] > [--info] [--debug] [--lock-wait N] > [--show-rc] [--no-files-cache] > [--umask M] > [--remote-path PATH] > [-e {none,keyfile,repokey}] [-a] > [REPOSITORY] > borg-linux64-1.0.10rc1 init: error: argument REPOSITORY: Invalid > location format: "ssh://testmachine" > >> Try the normal linux64 binary as well as the pyi321-debug one. > > The py321-debug one does it right too. > >> > adding a space to the end fixes the name: (well obviously testname >> > doesn't exist, but it's looking up the right name now) >> >> Sounds like an off-by-one error. Just strange this did not surface > before. > > Yes - I'm also confused - I used borg on quite a lot of systems and this > is the first that's having this problem. Maybe some environment variable > influencing this that's only set on this system? I didn't find a clue > yet... > > Best Regards > Heiko Helmle > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- Adrian Klaver adrian.klaver at aklaver.com From tw at waldmann-edv.de Fri Feb 10 09:25:54 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 10 Feb 2017 15:25:54 +0100 Subject: [Borgbackup] borg cutting off last letter of host for remote repos In-Reply-To: <3999d20c-7a7c-b76c-dffe-e52d0fc5a315@aklaver.com> References: <1708ef14-841b-30f4-65ea-6229ce44163b@waldmann-edv.de> <3999d20c-7a7c-b76c-dffe-e52d0fc5a315@aklaver.com> Message-ID: <64df2670-fc43-3001-ef39-bb5e41fb04b3@waldmann-edv.de> Try this with the 1.0.9 binary: borg init ssh://hostname/repopath I suspect this is the sloppy location parsing regex in 1.0.9 which was fixed in 1.0.10rc1 (and not pyinstaller). -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From adrian.klaver at aklaver.com Fri Feb 10 10:01:49 2017 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Fri, 10 Feb 2017 07:01:49 -0800 Subject: [Borgbackup] borg cutting off last letter of host for remote repos In-Reply-To: <64df2670-fc43-3001-ef39-bb5e41fb04b3@waldmann-edv.de> References: <1708ef14-841b-30f4-65ea-6229ce44163b@waldmann-edv.de> <3999d20c-7a7c-b76c-dffe-e52d0fc5a315@aklaver.com> <64df2670-fc43-3001-ef39-bb5e41fb04b3@waldmann-edv.de> Message-ID: On 02/10/2017 06:25 AM, Thomas Waldmann wrote: > Try this with the 1.0.9 binary: > > borg init ssh://hostname/repopath > > I suspect this is the sloppy location parsing regex in 1.0.9 which was I forgot to mention previously I am using 1.0.8, so the parsing issue may predate 1.0.9. FYI borg_new below is an artifact of running two version of borg side-side for a while. 
Still when I run above I get: aklaver at tito:~> borg_new init --remote-path /home/aklaver/bin/borg_new ssh://arkansas/test_repo Remote: Borg 1.0.8: exception in RPC call: Remote: Traceback (most recent call last): Remote: File "borg/remote.py", line 113, in serve Remote: File "borg/remote.py", line 153, in open Remote: File "borg/repository.py", line 81, in __enter__ Remote: File "borg/repository.py", line 102, in create Remote: PermissionError: [Errno 13] Permission denied: '/test_repo' Remote: Platform: Linux arkansas 4.8.6-x86_64-linode78 #1 SMP Tue Nov 1 14:51:21 EDT 2016 x86_64 x86_64 Remote: Linux: debian stretch/sid Remote: Borg: 1.0.8 Python: CPython 3.5.2 Remote: PID: 19323 CWD: /home/aklaver Remote: sys.argv: ['/home/aklaver/bin/borg_new', 'serve', '--umask=077'] Remote: SSH_ORIGINAL_COMMAND: None Remote: /bin/sh: /tmp/_MEIJ2G7ob/libreadline.so.6: no version information available (required by /bin/sh) /bin/sh: relocation error: /bin/sh: symbol rl_filename_stat_hook, version READLINE_6.3 not defined in file libreadline.so.6 with link time reference ('Remote Exception (see remote log for the traceback)', 'PermissionError') Platform: Linux tito 4.4.36-8-default #1 SMP Fri Dec 9 16:18:38 UTC 2016 (3ec5648) x86_64 Linux: openSUSE 42.2 x86_64 Borg: 1.0.8 Python: CPython 3.5.2 PID: 4219 CWD: /home/aklaver sys.argv: ['borg_new', 'init', '--remote-path', '/home/aklaver/bin/borg_new', 'ssh://arkansas/test_repo'] SSH_ORIGINAL_COMMAND: None This works though: aklaver at tito:~> borg_new init --remote-path /home/aklaver/bin/borg_new arkansas:test_repo Enter new passphrase: Enter same passphrase again: > fixed in 1.0.10rc1 (and not pyinstaller). > > -- Adrian Klaver adrian.klaver at aklaver.com From tw at waldmann-edv.de Fri Feb 10 10:25:18 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Fri, 10 Feb 2017 16:25:18 +0100 Subject: [Borgbackup] borg cutting off last letter of host for remote repos In-Reply-To: References: <1708ef14-841b-30f4-65ea-6229ce44163b@waldmann-edv.de> <3999d20c-7a7c-b76c-dffe-e52d0fc5a315@aklaver.com> <64df2670-fc43-3001-ef39-bb5e41fb04b3@waldmann-edv.de> Message-ID: > Remote: PermissionError: [Errno 13] Permission denied: '/test_repo' This isn't related to this "last letter" issue and obviously a permission issue on your backup server. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From adrian.klaver at aklaver.com Fri Feb 10 10:34:48 2017 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Fri, 10 Feb 2017 07:34:48 -0800 Subject: [Borgbackup] borg cutting off last letter of host for remote repos In-Reply-To: References: <1708ef14-841b-30f4-65ea-6229ce44163b@waldmann-edv.de> <3999d20c-7a7c-b76c-dffe-e52d0fc5a315@aklaver.com> <64df2670-fc43-3001-ef39-bb5e41fb04b3@waldmann-edv.de> Message-ID: On 02/10/2017 07:25 AM, Thomas Waldmann wrote: >> Remote: PermissionError: [Errno 13] Permission denied: '/test_repo' > > This isn't related to this "last letter" issue and obviously a > permission issue on your backup server. Yeah because it trying to create the directory from root(/). The host name arkansas is coming from ~/.ssh/config and is an alias for: Host arkansas User aklaver Hostname xxx.xxx.xxx.xx Using: ssh://arkansas:test_repo2 form works as it drops directly into my home directory. Not sure what ssh://arkansas/test_repo is supposed to do? 
Though this works: borg_new init --remote-path /home/aklaver/bin/borg_new ssh://arkansas//home/aklaver/test_repo2 > > -- Adrian Klaver adrian.klaver at aklaver.com From tw at waldmann-edv.de Mon Feb 13 06:35:57 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 13 Feb 2017 12:35:57 +0100 Subject: [Borgbackup] borgbackup 1.0.10 released Message-ID: <067144fb-5230-ed12-b625-64f23a08c75c@waldmann-edv.de> Bugfix release, details see there: https://github.com/borgbackup/borg/releases/tag/1.0.10 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From jgoerzen at complete.org Mon Feb 13 13:20:40 2017 From: jgoerzen at complete.org (John Goerzen) Date: Mon, 13 Feb 2017 12:20:40 -0600 Subject: [Borgbackup] Poor dedup with tar overlay Message-ID: <42b15232-a3ca-21d2-e5e9-6f10db636c78@complete.org> Hi folks, Long story, but I've been running borg over a 60GB filesystem for awhile now. This has been working fine. I had a long thought regarding verifiability, and thought that I could pipe an uncompressed tar of the same data into borg. This should, theoretically, use very little space, since tar has some metadata (highly compressible), and NULL-padded blocks of data. These data blocks would be exact matches for what's already in the borg repo. To my surprise, however, this experiment consumed 12GB after compression and dedup. Any ideas why that might be? My chunker params are at the default. Thanks, John From public at enkore.de Mon Feb 13 13:56:46 2017 From: public at enkore.de (Marian Beermann) Date: Mon, 13 Feb 2017 19:56:46 +0100 Subject: [Borgbackup] Poor dedup with tar overlay In-Reply-To: <42b15232-a3ca-21d2-e5e9-6f10db636c78@complete.org> References: <42b15232-a3ca-21d2-e5e9-6f10db636c78@complete.org> Message-ID: <68f2fd14-ef35-b626-2222-1b505c4cccae@enkore.de> Hi John, when working on separate files the first block start is implicitly set by the file start. When working on something like a tar archive this is not the case, instead, the tar archive looks something like: header metadata for file #1 contents of file #1 metadata for file #2 contents of file #2 ... So every metadata block that is interlaced between the contents of the adjacent files most likely influences the chunker, and will most likely be included in the last chunk (assuming big-ish files here now) of the preceding, or the first chunk of the following, or split across them. This would mean that there is no efficient deduplication against files that are only 1-2 chunks long. Smaller files (that would not be considered for chunking, <512 kB by default) would not deduplicate at all, since they would be chunked together with their interlaced metadata like a big file. Cheers, Marian On 13.02.2017 19:20, John Goerzen wrote: > Hi folks, > > Long story, but I've been running borg over a 60GB filesystem for awhile > now. This has been working fine. > > I had a long thought regarding verifiability, and thought that I could > pipe an uncompressed tar of the same data into borg. This should, > theoretically, use very little space, since tar has some metadata > (highly compressible), and NULL-padded blocks of data. These data > blocks would be exact matches for what's already in the borg repo. > > To my surprise, however, this experiment consumed 12GB after compression > and dedup. Any ideas why that might be? > > My chunker params are at the default. 
> > Thanks, > > John > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From jgoerzen at complete.org Mon Feb 13 14:25:34 2017 From: jgoerzen at complete.org (John Goerzen) Date: Mon, 13 Feb 2017 13:25:34 -0600 Subject: [Borgbackup] Poor dedup with tar overlay In-Reply-To: <68f2fd14-ef35-b626-2222-1b505c4cccae@enkore.de> References: <42b15232-a3ca-21d2-e5e9-6f10db636c78@complete.org> <68f2fd14-ef35-b626-2222-1b505c4cccae@enkore.de> Message-ID: Thanks -- I think I'm mostly following that. I believe that borg uses a sliding window like rsync, so ought to be able to identify the start of a chunk properly, right? But what you're saying is that we'd have an issue with the last chunk of a file, since in the tar case it could contain NULL padding or metadata for the next file (or even data for the next file), right? I also didn't realize that it didn't attempt to dedup files less than 512KB. (Or is that doesn't attempt to /chunk/ files less than 512KB? I'm a little confused about the implication.) The dataset in question contained about 100,000 files, of which there are probably a great many very small ones. So this is a very helpful conversation. What I'm really after, incidentally, is something like "borg compare" that would take a borg archive and a live filesystem and compare byte-for-byte every file, permission bit, etc. and make sure it's good. I figured that by storing a tar file in the repo, I could approximate this by calculating the sha256sum of it as it goes in, and later extract/compare it at will. Thanks, John On 02/13/2017 12:56 PM, Marian Beermann wrote: > Hi John, > > when working on separate files the first block start is implicitly set > by the file start. When working on something like a tar archive this is > not the case, instead, the tar archive looks something like: > > header metadata for file #1 contents of file #1 metadata for file #2 > contents of file #2 ... > > So every metadata block that is interlaced between the contents of the > adjacent files most likely influences the chunker, and will most likely > be included in the last chunk (assuming big-ish files here now) of the > preceding, or the first chunk of the following, or split across them. > > This would mean that there is no efficient deduplication against files > that are only 1-2 chunks long. > > Smaller files (that would not be considered for chunking, <512 kB by > default) would not deduplicate at all, since they would be chunked > together with their interlaced metadata like a big file. > > Cheers, Marian > > On 13.02.2017 19:20, John Goerzen wrote: >> Hi folks, >> >> Long story, but I've been running borg over a 60GB filesystem for awhile >> now. This has been working fine. >> >> I had a long thought regarding verifiability, and thought that I could >> pipe an uncompressed tar of the same data into borg. This should, >> theoretically, use very little space, since tar has some metadata >> (highly compressible), and NULL-padded blocks of data. These data >> blocks would be exact matches for what's already in the borg repo. >> >> To my surprise, however, this experiment consumed 12GB after compression >> and dedup. Any ideas why that might be? >> >> My chunker params are at the default. 
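
For what it's worth, the "checksum the tar stream on the way in" idea John
describes above can be sketched with plain borg 1.0.x features (repository
path and archive name here are made up; the tee process substitution is
bash syntax):

# create: record the SHA-256 of the tar stream while it is being stored
$ tar -C / -cf - home \
    | tee >(sha256sum > /root/home-tar-2017-02-13.sha256) \
    | borg create --stats /path/to/repo::home-tar-2017-02-13 -

# verify later: replay the stream out of the repository and compare
$ borg extract --stdout /path/to/repo::home-tar-2017-02-13 | sha256sum
$ cat /root/home-tar-2017-02-13.sha256

As discussed in this thread, the price is poor deduplication of the tar
archive against the file-based backups of the same data.
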
>> >> Thanks, >> >> John >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup >> > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -------------- next part -------------- An HTML attachment was scrubbed... URL: From public at enkore.de Mon Feb 13 14:30:25 2017 From: public at enkore.de (Marian Beermann) Date: Mon, 13 Feb 2017 20:30:25 +0100 Subject: [Borgbackup] Poor dedup with tar overlay In-Reply-To: References: <42b15232-a3ca-21d2-e5e9-6f10db636c78@complete.org> <68f2fd14-ef35-b626-2222-1b505c4cccae@enkore.de> Message-ID: On 13.02.2017 20:25, John Goerzen wrote: > Thanks -- I think I'm mostly following that. > > I believe that borg uses a sliding window like rsync, so ought to be > able to identify the start of a chunk properly, right? But what you're > saying is that we'd have an issue with the last chunk of a file, since > in the tar case it could contain NULL padding or metadata for the next > file (or even data for the next file), right? Yes > I also didn't realize that it didn't attempt to dedup files less than > 512KB. (Or is that doesn't attempt to /chunk/ files less than 512KB? > I'm a little confused about the implication.) It doesn't chunk them, since files that are shorter than the minimum chunk size as defined by --chunker-params would always result in one chunk. This doesn't really matter, it's an implementation detail / optimization of the chunker C code. Deduplication is the same as always for these. > The dataset in question contained about 100,000 files, of which there > are probably a great many very small ones. > > So this is a very helpful conversation. What I'm really after, > incidentally, is something like "borg compare" that would take a borg > archive and a live filesystem and compare byte-for-byte every file, > permission bit, etc. and make sure it's good. I figured that by storing > a tar file in the repo, I could approximate this by calculating the > sha256sum of it as it goes in, and later extract/compare it at will. There is borg-diff for between *archives* in the same repo, but not between archive and outside file structure. 1.1beta has some advanced --format options for borg-list, including hashes, which allows to reproduce SHA256SUMS and similar outputs. Another option is to borg-mount an archive and run a recursive diff (diff -r) against it. > Thanks, > > John > > On 02/13/2017 12:56 PM, Marian Beermann wrote: >> Hi John, >> >> when working on separate files the first block start is implicitly set >> by the file start. When working on something like a tar archive this is >> not the case, instead, the tar archive looks something like: >> >> header metadata for file #1 contents of file #1 metadata for file #2 >> contents of file #2 ... >> >> So every metadata block that is interlaced between the contents of the >> adjacent files most likely influences the chunker, and will most likely >> be included in the last chunk (assuming big-ish files here now) of the >> preceding, or the first chunk of the following, or split across them. >> >> This would mean that there is no efficient deduplication against files >> that are only 1-2 chunks long. >> >> Smaller files (that would not be considered for chunking, <512 kB by >> default) would not deduplicate at all, since they would be chunked >> together with their interlaced metadata like a big file. 
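
To make Marian's two suggestions concrete, a rough sketch (repository,
archive name and mount point are hypothetical; the --format keys are as of
the 1.1 betas):

# borg 1.0.x: mount the archive and diff it against the live tree
$ mkdir -p /tmp/borgmnt
$ borg mount /path/to/repo::home-2017-02-13 /tmp/borgmnt
$ diff -r /tmp/borgmnt/home /home
$ fusermount -u /tmp/borgmnt

# borg 1.1 betas: emit checksums straight from the archive metadata
$ borg list --format '{sha256}  {path}{NL}' /path/to/repo::home-2017-02-13 > /tmp/SHA256SUMS
$ (cd / && sha256sum -c /tmp/SHA256SUMS)

The checksum variant assumes the archive stores paths without a leading
slash and that directories/symlinks are filtered out of the list before it
is fed to sha256sum -c.
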
>> >> Cheers, Marian >> >> On 13.02.2017 19:20, John Goerzen wrote: >>> Hi folks, >>> >>> Long story, but I've been running borg over a 60GB filesystem for awhile >>> now. This has been working fine. >>> >>> I had a long thought regarding verifiability, and thought that I could >>> pipe an uncompressed tar of the same data into borg. This should, >>> theoretically, use very little space, since tar has some metadata >>> (highly compressible), and NULL-padded blocks of data. These data >>> blocks would be exact matches for what's already in the borg repo. >>> >>> To my surprise, however, this experiment consumed 12GB after compression >>> and dedup. Any ideas why that might be? >>> >>> My chunker params are at the default. >>> >>> Thanks, >>> >>> John >>> _______________________________________________ >>> Borgbackup mailing list >>> Borgbackup at python.org >>> https://mail.python.org/mailman/listinfo/borgbackup >>> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > From tw at waldmann-edv.de Mon Feb 13 15:20:06 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 13 Feb 2017 21:20:06 +0100 Subject: [Borgbackup] Poor dedup with tar overlay In-Reply-To: <68f2fd14-ef35-b626-2222-1b505c4cccae@enkore.de> References: <42b15232-a3ca-21d2-e5e9-6f10db636c78@complete.org> <68f2fd14-ef35-b626-2222-1b505c4cccae@enkore.de> Message-ID: <30da6a2a-bcc1-5ab6-b47b-8bace663033a@waldmann-edv.de> > when working on separate files the first block start is implicitly set > by the file start. When working on something like a tar archive this is > not the case, instead, the tar archive looks something like: > > header metadata for file #1 contents of file #1 metadata for file #2 > contents of file #2 ... > > So every metadata block that is interlaced between the contents of the > adjacent files most likely influences the chunker, and will most likely > be included in the last chunk (assuming big-ish files here now) of the > preceding, or the first chunk of the following, or split across them. BTW, we have a ticket about special chunkers for formats like tar, kind of to simulate separate files by knowing the tar format and chunking at file starts / ends. That is not implemented yet though and I think (if we ever implement that), it should wait until after borg 1.2 (because we will refactor internal architecture then into some separate workers (for worker threads). Likely it will be easier to swap code for some components of borg after that refactoring. I am not sure whether it would be worth it for tar files, though. A even simpler fixed-block chunker could support database files with fixed record size, there is also a ticket about that. We also have a ticket about steering the chunker by file extension, which would be needed to trigger these chunkers while using the normal rolling hash chunker for the rest. But that all is 1.3+ (if ever), so let's rather concentrate on getting 1.1 released and then not packing too much into 1.2, so it can be released in a timely manner. 
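
Until such special-purpose chunkers exist, the effect being discussed is
easy to measure with a throwaway experiment against the current
rolling-hash chunker (repository and archive names are hypothetical):

# 1) regular file-based backup
$ borg create --stats /path/to/repo::files-2017-02-13 /home

# 2) the same data as an uncompressed tar stream into the same repo
$ tar -C / -cf - home | borg create --stats /path/to/repo::tar-2017-02-13 -

# compare the "Deduplicated size" reported for run 2: with the default
# chunker params it typically stays well above zero, for the reasons
# explained earlier in this thread.
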
-- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From roland at micite.net Mon Feb 13 15:25:20 2017 From: roland at micite.net (Roland van Laar) Date: Mon, 13 Feb 2017 21:25:20 +0100 Subject: [Borgbackup] Poor dedup with tar overlay In-Reply-To: <30da6a2a-bcc1-5ab6-b47b-8bace663033a@waldmann-edv.de> References: <42b15232-a3ca-21d2-e5e9-6f10db636c78@complete.org> <68f2fd14-ef35-b626-2222-1b505c4cccae@enkore.de> <30da6a2a-bcc1-5ab6-b47b-8bace663033a@waldmann-edv.de> Message-ID: > That is not implemented yet though and I think (if we ever implement > that), it should wait until after borg 1.2 (because we will refactor > internal architecture then into some separate workers (for worker > threads). Likely it will be easier to swap code for some components of > borg after that refactoring. Is there a roadmap for Borg? The workers thread sound like an awesome feature. I write python code for a living. How can I help Borg? Regards, Roland From tw at waldmann-edv.de Mon Feb 13 16:03:06 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 13 Feb 2017 22:03:06 +0100 Subject: [Borgbackup] Poor dedup with tar overlay In-Reply-To: References: <42b15232-a3ca-21d2-e5e9-6f10db636c78@complete.org> <68f2fd14-ef35-b626-2222-1b505c4cccae@enkore.de> <30da6a2a-bcc1-5ab6-b47b-8bace663033a@waldmann-edv.de> Message-ID: <325aa6a9-0aac-5673-a571-870345c354c3@waldmann-edv.de> > Is there a roadmap for Borg? We use github milestones. Some info is also in the github "project" and in some tickets. > The workers thread sound like an awesome feature. Yes, it will enable to put full load on multiple cores. Currently we can only load 1 core and often it is not even loading this 1 core fully, but just waiting for I/O. > I write python code for a living. > How can I help Borg? See the development section of the docs, you could always grab some ticket and work on it. https://borgbackup.readthedocs.io/en/stable/development.html -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From jgoerzen at complete.org Mon Feb 13 16:26:19 2017 From: jgoerzen at complete.org (John Goerzen) Date: Mon, 13 Feb 2017 15:26:19 -0600 Subject: [Borgbackup] nlinks in borg mount Message-ID: <4e807545-ecca-2e53-10b1-f18146107b7b@complete.org> Hi folks, For directories mounted with borg mount, stat seems to always see st_nlink=1. Would it be possible for this to mimic the more standard POSIX behavior? Thanks, John From tw at waldmann-edv.de Mon Feb 13 16:51:52 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 13 Feb 2017 22:51:52 +0100 Subject: [Borgbackup] nlinks in borg mount In-Reply-To: <4e807545-ecca-2e53-10b1-f18146107b7b@complete.org> References: <4e807545-ecca-2e53-10b1-f18146107b7b@complete.org> Message-ID: <1d3be997-d23a-8d2f-66f3-d4bfa68cc530@waldmann-edv.de> > For directories mounted with borg mount, stat seems to always see > st_nlink=1. Would it be possible for this to mimic the more standard > POSIX behavior? I guess so. Why do you need it? 
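
For context, the difference being discussed is easy to see with stat on a
mounted archive versus the live tree (repository, mount point and paths
are hypothetical):

# on the live filesystem, a directory's link count reflects its subdirectories
$ stat -c '%h %n' /home /home/someuser

# on a borg 1.0.x FUSE mount the same directories report st_nlink=1
$ mkdir -p /tmp/borgmnt
$ borg mount /path/to/repo::home-2017-02-13 /tmp/borgmnt
$ stat -c '%h %n' /tmp/borgmnt/home /tmp/borgmnt/home/someuser
$ fusermount -u /tmp/borgmnt
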
-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From public at enkore.de  Mon Feb 13 16:56:59 2017
From: public at enkore.de (Marian Beermann)
Date: Mon, 13 Feb 2017 22:56:59 +0100
Subject: [Borgbackup] nlinks in borg mount
In-Reply-To: <1d3be997-d23a-8d2f-66f3-d4bfa68cc530@waldmann-edv.de>
References: <4e807545-ecca-2e53-10b1-f18146107b7b@complete.org>
 <1d3be997-d23a-8d2f-66f3-d4bfa68cc530@waldmann-edv.de>
Message-ID: 

On 13.02.2017 22:51, Thomas Waldmann wrote:
>
>> For directories mounted with borg mount, stat seems to always see
>> st_nlink=1. Would it be possible for this to mimic the more standard
>> POSIX behavior?
>
> I guess so. Why do you need it?
>

btrfs also does nlink=1... I believe this isn't codified anywhere.

From eric at in3x.io  Mon Feb 13 17:09:47 2017
From: eric at in3x.io (Eric S. Johansson)
Date: Mon, 13 Feb 2017 17:09:47 -0500
Subject: [Borgbackup] Duplicating repository and making it independent from the parent
In-Reply-To: 
References: <76c2faaf-e5ad-9179-2a7d-1b4ca6f338bc@in3x.io>
 <01ee18c6-b919-d394-f53d-95edb451e77b@aklaver.com>
Message-ID: <90319567-3ddf-fbc2-29b1-181e08f5e5cd@in3x.io>

I have a better understanding of my problem, but no fix.

On 2/9/2017 12:54 PM, Eric S. Johansson wrote:
>
> For a few weeks I backed up local data into a local repository. Using
> rsync, I replicated the repository to a remote machine. Now I have two
> repositories, a local repository and a remote repository.
>
> client machine --borg create--> repo1
> repo1 --copy--> repo2
>
> I ran Borg to update the remote repository and I was told that the
> remote repository originally belonged to the local address and I was
> given the option of reassigning it to the remote address. When I updated
> the local repository, I was told that it belonged to the remote address
> and given the option of reassigning it to the local address. When I did
> that, I was told the cache was newer, indicating something bad happened.
>
> client machine --borg create--> repo2
> # Got warning here about repository belonging to a local location
>
> client machine --borg create--> repo1
> # Got warning here about repository belonging to remote (repo2) location
> and was told that the cache was newer than the repository.
>
> I'm redoing the replication to make sure I didn't mess things up. I'll
> let you know whether I was able to reproduce the problem or not.

I think I figured it out. I don't know how to fix it, but apparently when
I replicated the repository, I replicated everything, which created two
repositories with the same ID, hence the error messages:

----
Warning: The repository at location ssh://x at y.dyndns.org/mnt/borg/system
was previously located at /onboard/localborg/system
Do you want to continue? [yN] y
Cache is newer than repository - this is either an attack or unsafe
(multiple repos with same ID)
----

I think the question I should be asking is: once I've copied a repository
with rsync, how do I give it a new ID?

I believe getting a new repository ID will give me what I want, a baseline
replica of the local Borg repository that I can treat independently for
future updates directly by Borg.

-- 
Eric S. Johansson
eric at in3x.io
http://www.in3x.io
978-512-0272
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jgoerzen at complete.org Mon Feb 13 17:14:11 2017 From: jgoerzen at complete.org (John Goerzen) Date: Mon, 13 Feb 2017 16:14:11 -0600 Subject: [Borgbackup] nlinks in borg mount In-Reply-To: References: <4e807545-ecca-2e53-10b1-f18146107b7b@complete.org> <1d3be997-d23a-8d2f-66f3-d4bfa68cc530@waldmann-edv.de> Message-ID: <79c7fa35-5153-185c-2dce-26577b44c841@complete.org> I was trying to use mtree to compare the pre-backed-up and backed-up trees, and nlinks were always different on directories. I guess if btrfs is also an exception maybe I need to find a different method. John On 02/13/2017 03:56 PM, Marian Beermann wrote: > On 13.02.2017 22:51, Thomas Waldmann wrote: >>> For directories mounted with borg mount, stat seems to always see >>> st_nlink=1. Would it be possible for this to mimic the more standard >>> POSIX behavior? >> I guess so. Why do you need it? >> > btrfs also does nlink=1... I believe this isn't codified anywhere. > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From luc at spaceroots.org Thu Feb 23 06:20:32 2017 From: luc at spaceroots.org (Luc Maisonobe) Date: Thu, 23 Feb 2017 12:20:32 +0100 Subject: [Borgbackup] debugging BORG_PASSPHRASE issue Message-ID: Hi all, I am new to borg, and really happy with what I learn, thanks for the great work. I am having an issue with BORG_PASSPHRASE. If I run a "borg list" command on the original machine that did run "borg init" and "borg create", I get a result. If I do the same from the remote server holding the repository, everything is fine too. If I do this from a third machine, I get the following error message: passphrase supplied in BORG_PASSPHRASE is incorrect I am *sure* the passphrase is correct. I checked it visually, I did use copy/paste between the various ssh terminals, I used keypass to make sure I do not mistype something. Both the hosts that created the repo and the host on which I want to view the saved data connect to the server remotely using ssh. Both use BORG_RSH to specify the ssh key. Of course, I have also checked the same rsa key is on both machines (I copied it from one machine to the other, and also compared checksums). The server is configured as suggested in the doc with several different restricted paths depending on the ssh key provided in .ssh/authorized_keys. All machines are running linux (debian) and borg 1.0.9. This problem occurs only for one machine. On all other hosts I backup, I can do this (i.e. checking on one desktop machine what has been saved by a few servers). Each machine has its own set of passphrase, ssh key and repository. I tried to use borg list --info and even --debug to get some hints about what was going wrong, but got really few information. Here is what I get: (lehrin) luc% BORG_REPO="@:" \ BORG_RSH="ssh -i ~/.ssh/id_rsa_the_key_I_want" \ BORG_PASSPHRASE="IAmSureThisIsCorrect" \ borg list --debug using builtin fallback logging configuration Remote: using builtin fallback logging configuration passphrase supplied in BORG_PASSPHRASE is incorrect (lehrin) luc% So how could I get more information about what the client and server try to do? best regards, Luc From ndbecker2 at gmail.com Thu Feb 23 06:45:41 2017 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 23 Feb 2017 11:45:41 +0000 Subject: [Borgbackup] prune before create? 
Message-ID: I saw some advice a few weeks ago on this list that it made more sense in my regular backup script, to prune first, then create new backup (than the other way around, as in the example script). I've been using this since, but I'm wondering, in case of some failure, wouldn't I rather backup first then prune? -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian.klaver at aklaver.com Thu Feb 23 08:56:25 2017 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Thu, 23 Feb 2017 05:56:25 -0800 Subject: [Borgbackup] debugging BORG_PASSPHRASE issue In-Reply-To: References: Message-ID: <8401d565-6bac-351c-9ebe-bdfd2789cb6f@aklaver.com> On 02/23/2017 03:20 AM, Luc Maisonobe wrote: > Hi all, > > I am new to borg, and really happy with what I learn, thanks for the > great work. > > I am having an issue with BORG_PASSPHRASE. If I run a "borg list" > command on the original machine that did run "borg init" and > "borg create", I get a result. If I do the same from the remote > server holding the repository, everything is fine too. If I do > this from a third machine, I get the following error message: > > passphrase supplied in BORG_PASSPHRASE is incorrect > > I am *sure* the passphrase is correct. I checked it visually, I > did use copy/paste between the various ssh terminals, I used > keypass to make sure I do not mistype something. > > Both the hosts that created the repo and the host on which I > want to view the saved data connect to the server remotely > using ssh. Both use BORG_RSH to specify the ssh key. Of course, > I have also checked the same rsa key is on both machines (I > copied it from one machine to the other, and also compared > checksums). The server is configured as suggested in the > doc with several different restricted paths depending on the > ssh key provided in .ssh/authorized_keys. All machines are > running linux (debian) and borg 1.0.9. > > This problem occurs only for one machine. On all other hosts > I backup, I can do this (i.e. checking on one desktop machine > what has been saved by a few servers). Each machine has its > own set of passphrase, ssh key and repository. > > I tried to use borg list --info and even --debug to get some > hints about what was going wrong, but got really few information. > Here is what I get: > > (lehrin) luc% BORG_REPO="@:" \ > BORG_RSH="ssh -i ~/.ssh/id_rsa_the_key_I_want" \ > BORG_PASSPHRASE="IAmSureThisIsCorrect" \ > borg list --debug > using builtin fallback logging configuration > Remote: using builtin fallback logging configuration > passphrase supplied in BORG_PASSPHRASE is incorrect > (lehrin) luc% Encoding issues? What happens if you put the passphrase in a file and scp that to the remote machine and then look inside that file on the remote machine? > > So how could I get more information about what the client > and server try to do? > > best regards, > Luc > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -- Adrian Klaver adrian.klaver at aklaver.com From adrian.klaver at aklaver.com Thu Feb 23 09:02:36 2017 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Thu, 23 Feb 2017 06:02:36 -0800 Subject: [Borgbackup] prune before create? 
In-Reply-To: References: Message-ID: On 02/23/2017 03:45 AM, Neal Becker wrote: > I saw some advice a few weeks ago on this list that it made more sense > in my regular backup script, to prune first, then create new backup > (than the other way around, as in the example script). I've been using > this since, but I'm wondering, in case of some failure, wouldn't I > rather backup first then prune? Well the prune will be working from the oldest to newest so your most recent backup(s) prior to the failure will still be there. The number of those recent backups will be dependent on how often you create archives and how aggressively you prune. If this does not answer your question, is there a specific concern you have? -- Adrian Klaver adrian.klaver at aklaver.com From luc at spaceroots.org Thu Feb 23 12:55:43 2017 From: luc at spaceroots.org (Luc Maisonobe) Date: Thu, 23 Feb 2017 18:55:43 +0100 Subject: [Borgbackup] debugging BORG_PASSPHRASE issue In-Reply-To: <8401d565-6bac-351c-9ebe-bdfd2789cb6f@aklaver.com> References: <8401d565-6bac-351c-9ebe-bdfd2789cb6f@aklaver.com> Message-ID: <12213b21-035a-1676-8d23-b5291178ca1f@spaceroots.org> Le 23/02/2017 ? 14:56, Adrian Klaver a ?crit : > On 02/23/2017 03:20 AM, Luc Maisonobe wrote: >> Hi all, >> >> I am new to borg, and really happy with what I learn, thanks for the >> great work. >> >> I am having an issue with BORG_PASSPHRASE. If I run a "borg list" >> command on the original machine that did run "borg init" and >> "borg create", I get a result. If I do the same from the remote >> server holding the repository, everything is fine too. If I do >> this from a third machine, I get the following error message: >> >> passphrase supplied in BORG_PASSPHRASE is incorrect >> >> I am *sure* the passphrase is correct. I checked it visually, I >> did use copy/paste between the various ssh terminals, I used >> keypass to make sure I do not mistype something. >> >> Both the hosts that created the repo and the host on which I >> want to view the saved data connect to the server remotely >> using ssh. Both use BORG_RSH to specify the ssh key. Of course, >> I have also checked the same rsa key is on both machines (I >> copied it from one machine to the other, and also compared >> checksums). The server is configured as suggested in the >> doc with several different restricted paths depending on the >> ssh key provided in .ssh/authorized_keys. All machines are >> running linux (debian) and borg 1.0.9. >> >> This problem occurs only for one machine. On all other hosts >> I backup, I can do this (i.e. checking on one desktop machine >> what has been saved by a few servers). Each machine has its >> own set of passphrase, ssh key and repository. >> >> I tried to use borg list --info and even --debug to get some >> hints about what was going wrong, but got really few information. >> Here is what I get: >> >> (lehrin) luc% BORG_REPO="@:" \ >> BORG_RSH="ssh -i ~/.ssh/id_rsa_the_key_I_want" \ >> BORG_PASSPHRASE="IAmSureThisIsCorrect" \ >> borg list --debug >> using builtin fallback logging configuration >> Remote: using builtin fallback logging configuration >> passphrase supplied in BORG_PASSPHRASE is incorrect >> (lehrin) luc% > > Encoding issues? > > What happens if you put the passphrase in a file and scp that to the > remote machine and then look inside that file on the remote machine? The content seems identical on both machines. In fact, the password is extremely long but contains only random printable ASCII characters. 
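
One low-tech way to rule out an encoding or invisible-character difference
without ever printing the passphrase is to compare a digest and a byte
count of what each machine actually exports (run the same commands on both
sides and compare the output):

$ printf '%s' "$BORG_PASSPHRASE" | sha256sum
$ printf '%s' "$BORG_PASSPHRASE" | wc -c
# or, to spot stray bytes directly:
$ printf '%s' "$BORG_PASSPHRASE" | od -c | head
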
Default encoding for users is UTF8 on all machines.

I have made some progress in investigating the issue. The problem seems to
be user-dependent rather than machine-dependent. If on any machine I run
the "borg list" command as root, it works; if I run it as my regular user,
it fails. The same success/failure pattern occurs on both the original
machine and the desktop machine.

I don't think it is related to weird sudo and environment variables,
because in order to become root I first did "sudo -i" to get a shell, and
in this root shell I ran the command. I even put the ssh keys temporarily
in /tmp with world read access to make sure I use the exact same keys
regardless of the user.

Here is an example, simply copy/pasted from an attempt on the desktop
machine (I have only redacted the hostname, password, etc. as I sent this
to a public list, and folded lines for the mail):

(lehrin) luc% sudo -i
[sudo] Mot de passe de luc :
root at lehrin:~# BORG_REPO="@:" \
  BORG_RSH="ssh -i /tmp/id1" \
  BORG_PASSPHRASE="xxx" borg list
marislae-2017-02-21    Tue, 2017-02-21 21:39:00
marislae-2017-02-23    Thu, 2017-02-23 11:07:12
root at lehrin:~# déconnexion    <-- here I simply typed Ctrl-D
(lehrin) luc% BORG_REPO="@:" \
  BORG_RSH="ssh -i /tmp/id1" \
  BORG_PASSPHRASE="xxx" borg list
passphrase supplied in BORG_PASSPHRASE is incorrect
(lehrin) luc%

Is the passphrase somehow salted with user name or user id? I don't think
so, because it works for other hosts which were also created by their own
super user.

I also suspected something related to virtualenv, so I unset the workon
function and the virtualenv-related variables, but still got the issue.
I also compared environments in the two settings above (root and regular
user) by running "env" instead of "borg list", but the environment
variables seem OK.

Luc

>
>
>> So how could I get more information about what the client
>> and server try to do?
>>
>> best regards,
>> Luc
>> _______________________________________________
>> Borgbackup mailing list
>> Borgbackup at python.org
>> https://mail.python.org/mailman/listinfo/borgbackup
>
>

From public at enkore.de  Thu Feb 23 13:03:46 2017
From: public at enkore.de (Marian Beermann)
Date: Thu, 23 Feb 2017 19:03:46 +0100
Subject: [Borgbackup] debugging BORG_PASSPHRASE issue
In-Reply-To: <12213b21-035a-1676-8d23-b5291178ca1f@spaceroots.org>
References: <8401d565-6bac-351c-9ebe-bdfd2789cb6f@aklaver.com>
 <12213b21-035a-1676-8d23-b5291178ca1f@spaceroots.org>
Message-ID: <1aa3bb0a-3ddb-4cc3-a7a5-3bca7243c00f@enkore.de>

On 23.02.2017 18:55, Luc Maisonobe wrote:
> Is the passphrase somehow salted with user name or user id? I don't
> think so because it works for other hosts which were also
> created by their own super user.

Nope, not salted / mixed with user IDs or names.

Other than that I don't have any ideas to contribute here right now.

Cheers, Marian

From luc at spaceroots.org  Thu Feb 23 14:15:57 2017
From: luc at spaceroots.org (Luc Maisonobe)
Date: Thu, 23 Feb 2017 20:15:57 +0100
Subject: [Borgbackup] debugging BORG_PASSPHRASE issue (SOLVED)
In-Reply-To: <1aa3bb0a-3ddb-4cc3-a7a5-3bca7243c00f@enkore.de>
References: <8401d565-6bac-351c-9ebe-bdfd2789cb6f@aklaver.com>
 <12213b21-035a-1676-8d23-b5291178ca1f@spaceroots.org>
 <1aa3bb0a-3ddb-4cc3-a7a5-3bca7243c00f@enkore.de>
Message-ID: <88b9c5ff-3e2d-0e61-69d9-9443dfdd9663@spaceroots.org>

On 23/02/2017 at 19:03, Marian Beermann wrote:
> On 23.02.2017 18:55, Luc Maisonobe wrote:
>> Is the passphrase somehow salted with user name or user id?
>> I don't
>> think so because it works for other hosts which were also
>> created by their own super user.
>
> Nope, not salted / mixed with user IDs or names.
>
> Other than that I don't have any ideas to contribute here right now.

OK, I have found the issue.

As I explained at the beginning, I have several hosts to back up, and I
check the results from my desktop machine. My regular account on the
desktop machine therefore has a copy of all the ssh keys needed to
connect. The BORG_RSH environment variable is used to select the proper
key using ssh options:

  BORG_RSH="ssh -i ~/.ssh/id_rsa_the_key_I_want"

However, this is not the whole story for openssh. In fact the "-i" option
just loads one key, to be able to propose it to the server during the
initial handshake, but ssh may also propose all the keys that come from an
authenticating agent, for example, which itself can read every key in
~/.ssh. In my case, it happened that the first key proposed was not
id_rsa_the_key_I_want but another key, corresponding to another host, and
this key was of course accepted by "borg serve" because its public half
was in the borg authorized_keys file! So borg selected another repository
and tried to open it with the supplied passphrase. As each of my
repositories has a different passphrase, borg correctly told me the
passphrase was wrong.

In order to solve the issue, I simply added another option to tell ssh to
use *only* the identity supplied with the -i option, so the environment
variable setting ended up as:

  BORG_RSH="ssh -i ~/.ssh/id_rsa_the_key_I_want -o IdentitiesOnly=yes"

I would suggest two things to the development team. The first one would be
to add the "-o IdentitiesOnly=yes" part to the example for the BORG_RSH
variable documentation (I read this online at ). The second one would be
to have more debug info when the --debug option is used. I tried this
option both in the "borg list" command (you have seen earlier that the
output was quite small) and in the "borg serve" commands in the
authorized_keys file on the server, and there I simply had no additional
output at all. At least printing something like which repository is
selected, before it is opened, would be nice.

best regards,
Luc

>
> Cheers, Marian
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

From maurice.libes at osupytheas.fr  Mon Feb 27 13:00:21 2017
From: maurice.libes at osupytheas.fr (Maurice Libes)
Date: Mon, 27 Feb 2017 19:00:21 +0100
Subject: [Borgbackup] deduplication understanding and best practice?
Message-ID: <8c45d93f-a449-e306-520d-3efa8ef03836@osupytheas.fr>

hello to all

I guess it is a FAQ but, never mind, I have a question concerning borg
deduplication and best practices.

Let's say I want to back up several different Linux PCs; in order to
benefit from block deduplication, is it better to:

i) make an "all in one" linux borg repository into which I back up several
PCs, with the aim of benefiting from deduplication of blocks that are
identical across many PCs
or
ii) different repositories: one per PC

many thanks for your advice or links

ML

I made a test, and in case i) I don't see an apparent benefit when I back
up a Linux PC "A" into the repository already filled by another Linux PC "B".

i) backup of PC "A" into its own repository:

borg create -v --info --stats
borg at myborgserver.univ.fr:/mnt/provigo-borg//sauve-pcA/::baklouti-homes-2016-01-17
/home/

Time (start): Tue, 2017-01-17 17:31:50
Time (end):   Tue, 2017-01-17 22:33:38
Duration: 5 hours 1 minutes 47.98 seconds
Number of files: 1384552
------------------------------------------------------------------------------
               Original size    Compressed size    Deduplicated size
This archive:      334.27 GB          334.34 GB            222.57 GB
All archives:      334.27 GB          334.34 GB            222.57 GB
               Unique chunks       Total chunks
Chunk index:         1231378            1469238

ii) backup of the same PC "A" into an "all in one" repository where another
PC has already been backed up:

borg create -v --info --stats
borg at rancid.mio.univ-amu.fr:/mnt/provigo-borg//sauve-ALLpc/::baklouti-homes-2016-01-17
/home/

------------------------------------------------------------------------------
Archive name: baklouti-homes-2016-01-17
Time (start): Wed, 2017-01-18 09:43:54
Time (end):   Wed, 2017-01-18 14:34:56
Duration: 4 hours 51 minutes 1.71 seconds
Number of files: 1384555
------------------------------------------------------------------------------
               Original size    Compressed size    Deduplicated size
This archive:      334.28 GB          334.34 GB            222.39 GB
All archives:        3.52 TB            3.50 TB            487.03 GB
               Unique chunks       Total chunks
Chunk index:         1651581            8721475

-- 
M. LIBES
Service Informatique OSU Pytheas - UMS 3470 CNRS
Batiment Oceanomed
Campus de Luminy
13288 Marseille cedex 9
Tel: 04860 90529
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From luc at spaceroots.org  Mon Feb 27 14:00:28 2017
From: luc at spaceroots.org (Luc Maisonobe)
Date: Mon, 27 Feb 2017 20:00:28 +0100
Subject: [Borgbackup] deduplication understanding and best practice?
In-Reply-To: <8c45d93f-a449-e306-520d-3efa8ef03836@osupytheas.fr>
References: <8c45d93f-a449-e306-520d-3efa8ef03836@osupytheas.fr>
Message-ID: <50b0ad47-2db6-3e3c-2276-8f88ec3ddcbf@spaceroots.org>

On 27/02/2017 at 19:00, Maurice Libes wrote:
> hello to all

Hi Maurice,

> I guess it is a FAQ but, never mind, I have a question concerning borg
> deduplication and best practices
>
> Let's say I want to back up several different Linux PCs;
> in order to benefit from block deduplication, is it better to:
>
> i) make an "all in one" linux borg repository into which I back up several
> PCs, with the aim of benefiting from deduplication of blocks identical
> across many PCs
> or
> ii) different repositories: one per PC

Just a newbie answer here, but as far as I understand, each repository is
completely independent. Therefore if two different hosts are saved in two
different repositories, a file shared by these two hosts will be saved
twice (once per repository).

So if your concern is to save space, you should use a single repository.
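
In practice the single-repository setup usually means one shared repo with
per-machine archive name prefixes, so prune can still be scoped per
machine. A minimal sketch (server, paths and the prefix are hypothetical):

# from machine "pcml"
$ borg create --stats borg@backupserver:/mnt/borg/all-pcs::pcml-$(date +%F) /home
$ borg prune --keep-daily=7 --keep-weekly=3 --prefix pcml- borg@backupserver:/mnt/borg/all-pcs

# the per-machine alternative: one repository per PC, no cross-machine dedup
$ borg create --stats borg@backupserver:/mnt/borg/pcml::$(date +%F) /home
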
If you don't really care about space (as it won't inflate too much after the first backup), then you can use 2 different repositories. Once again, this is only a newbie understanding, so it may be totally wrong. The experts will correct me. best regards, Luc > > > many thanks for your advices or links > > ML > > > I made a test, and in case i) I don't see an apparent benefit when I > backup a Linux pc "A" into the repository filled by another Linux PC "B" > > > i) backup of a PC "A" into its own repository : > > borg create -v --info --stats > borg at myborgserver.univ.fr:/mnt/provigo-borg//sauve-pcA/::baklouti-homes-2016-01-17 > /home/ > > /Time (start): Tue, 2017-01-17 17:31:50/ > > /Time (end): Tue, 2017-01-17 22:33:38/ > /*Duration: 5 hours 1 minutes 47.98 seconds* > //Number of files: > 1384552//------------------------------------------------------------------------------/ > /Original size Compressed size Deduplicated size/ > /This archive: 334.27 GB 334.34 GB /*222.57 GB* > /All archives: 334.27 GB 334.34 GB 222.57 GB/ > /Unique chunks Total chunks/ > /Chunk index: 1231378 1469238/ > > > ii) backup of same PC "A" into a "all in one" repository where another > PC has been already backuped > > borg create -v --info --stats > borg at rancid.mio.univ-amu.fr:/mnt/provigo-borg//sauve-ALLpc/::baklouti-homes-2016-01-17 > /home/ > > > > ------------------------------------------------------------------------------ > Archive name: baklouti-homes-2016-01-17 > Time (start): Wed, 2017-01-18 09:43:54 > Time (end): Wed, 2017-01-18 14:34:56 > *Duration: 4 hours 51 minutes 1.71 seconds* > Number of files: 1384555 > ------------------------------------------------------------------------------ > Original size Compressed size Deduplicated size > This archive: 334.28 GB 334.34 GB /*222.39 GB*/ > All archives: 3.52 TB 3.50 TB 487.03 GB > Unique chunks Total chunks > Chunk index: 1651581 8721475 > > > -- > M. LIBES > Service Informatique OSU Pytheas - UMS 3470 CNRS > Batiment Oceanomed > Campus de Luminy > 13288 Marseille cedex 9 > Tel: 04860 90529 > > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From tw at waldmann-edv.de Mon Feb 27 14:13:46 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 27 Feb 2017 20:13:46 +0100 Subject: [Borgbackup] deduplication understanding and best practice? In-Reply-To: <8c45d93f-a449-e306-520d-3efa8ef03836@osupytheas.fr> References: <8c45d93f-a449-e306-520d-3efa8ef03836@osupytheas.fr> Message-ID: <86897e29-a32d-459d-a63b-c810b6298332@waldmann-edv.de> > i) make a "all in one" linux borg-repository in which I backup several PC > in the aim of benefit of identical blocs deduplication of many PC Do this if you have time, but you want to save space (esp. if you have considerable duplication amongst the machines). Also consider that in all cases, you will benefit from lots of historical dedup, no matter whether you have 1 repo or multiple. > ii) different repositories : one by different PC Do this if you want to have fast backups (no chunks cache resync needed, see FAQ) or if your machines do not have much inter-machine duplication anyway. > I made a test, and in case i) I don't see an apparent benefit when I > backup a Linux pc "A" into the repository filled by another Linux PC "B" If it is same linux and same updates, it should dedup the OS. 
Of course, if you also have a lot of (different) data on them, the amount of OS might be little when compared to the amount of data. When it does inter-machine dedup, it is not as fast as the "unchanged file" detection on same machine. It will have to read and chunk all files at first backup, it will just not send the data to the repo. After the first backup, the "unchanged file" detection on same machine will kick in and speed it up a lot (if a lot of files did not change since last backup). > ii) backup of same PC "A" into a "all in one" repository where another > PC has been already backuped > > borg create -v --info --stats > borg at rancid.mio.univ-amu.fr:/mnt/provigo-borg//sauve-ALLpc/::baklouti-homes-2016-01-17 > /home/ > This archive: 334.28 GB 334.34 GB /*222.39 GB*/ That doesn't look like the same data was already in the repo. BTW, besides actually having dedupable data, you also need to make sure you use same chunker params. If the chunks get cut differently, it won't dedup. -- GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt. From maurice.libes at osupytheas.fr Tue Feb 28 04:26:49 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Tue, 28 Feb 2017 10:26:49 +0100 Subject: [Borgbackup] deduplication understanding and best practice? In-Reply-To: <86897e29-a32d-459d-a63b-c810b6298332@waldmann-edv.de> References: <8c45d93f-a449-e306-520d-3efa8ef03836@osupytheas.fr> <86897e29-a32d-459d-a63b-c810b6298332@waldmann-edv.de> Message-ID: Le 27/02/2017 ? 20:13, Thomas Waldmann a ?crit : >> i) make a "all in one" linux borg-repository in which I backup >> several PC >> in the aim of benefit of identical blocs deduplication of many PC > > Do this if you have time, but you want to save space (esp. if you have > considerable duplication amongst the machines). that's not easy to know before but I understand a single repository can save space (If we have considerable duplication ) , but not time > > Also consider that in all cases, you will benefit from lots of > historical dedup, no matter whether you have 1 repo or multiple. > >> ii) different repositories : one by different PC > > Do this if you want to have fast backups (no chunks cache resync > needed, see FAQ) or if your machines do not have much inter-machine > duplication anyway. > ok >> I made a test, and in case i) I don't see an apparent benefit when I >> backup a Linux pc "A" into the repository filled by another Linux PC "B" > > If it is same linux and same updates, it should dedup the OS. Of > course, if you also have a lot of (different) data on them, the amount > of OS might be little when compared to the amount of data. > I have not said that in this test, I was backuping only the users /home/ directories not the OS So indeed files are not the same between the /home directories of these 2 PC BUT, in my "imaginary" understanding I was thinking that "statistically" there could be a lot of duplication of blocs even in case of different files ? I mean "n" differents files (data files, netcdf, C, fortran, program file, pictures, etc...) , have surely many identical blocs sequences , no ? > When it does inter-machine dedup, it is not as fast as the "unchanged > file" detection on same machine. It will have to read and chunk all > files at first backup, it will just not send the data to the repo. 
ok I see > > After the first backup, the "unchanged file" detection on same machine > will kick in and speed it up a lot (if a lot of files did not change > since last backup). > >> ii) backup of same PC "A" into a "all in one" repository where another >> PC has been already backuped >> >> borg create -v --info --stats >> borg at rancid.mio.univ-amu.fr:/mnt/provigo-borg//sauve-ALLpc/::baklouti-homes-2016-01-17 >> >> /home/ >> This archive: 334.28 GB 334.34 GB /*222.39 GB*/ > > That doesn't look like the same data was already in the repo. Yes you're right, but I re write my think above: I believed that , even in case of different files , It could be /statistically/ many identical chunk sequences ? that could benefit to the whole backuped PC a priori this is not the case in my single test where I have backuped /home/* directories from 2 distincts PC > > BTW, besides actually having dedupable data, you also need to make > sure you use same chunker params. If the chunks get cut differently, > it won't dedup. > Ah that's a thing I didnt know, Where can I see and configure these chunker params ? RTFM? thanks for this interesting thread ML -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Tue Feb 28 08:29:27 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 28 Feb 2017 14:29:27 +0100 Subject: [Borgbackup] deduplication understanding and best practice? In-Reply-To: References: <8c45d93f-a449-e306-520d-3efa8ef03836@osupytheas.fr> <86897e29-a32d-459d-a63b-c810b6298332@waldmann-edv.de> Message-ID: >>> i) make a "all in one" linux borg-repository in which I backup >>> several PC >>> in the aim of benefit of identical blocs deduplication of many PC >> >> Do this if you have time, but you want to save space (esp. if you have >> considerable duplication amongst the machines). > > that's not easy to know before > but I understand > a single repository can save space (If we have considerable duplication > ) , but not time It's not just "not saving time", it will need quite some additional time for chunks cache resync. > I have not said that in this test, I was backuping only the users > /home/ directories > not the OS > > So indeed files are not the same between the /home directories of these 2 PC > > BUT, in my "imaginary" understanding I was thinking that > "statistically" there could be a lot of duplication of blocs even in > case of different files ? Well, if one looks with a ~2MiB granularity at your data, there aren't many identical chunks. When looking with finer granularity, it might discover increasingly more, but all these chunks need to get managed. That is the reason why we use ~2MiB (and not e.g. 64KiB any more, like attic and early borg did - the management overhead was just too big). > I mean "n" differents files (data files, netcdf, C, fortran, program > file, pictures, etc...) , have surely many identical blocs sequences , no ? No. Likely, the only widespread common block (in files of different descent) is the all-zero block. So, for small files (<512KiB), just assume that non-identical files won't dedup at all. For large files (>>2 MiB), assume there will be some dedup if they are of common descent at least. E.g. same virtual machine disk file in different states / ages. > Ah that's a thing I didnt know, Where can I see and configure these > chunker params ? RTFM? 
Yes, --chunker-params. But just keep the default except if you have quite specific knowledge that you need something different. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tmhikaru at gmail.com Wed Mar 1 01:22:14 2017 From: tmhikaru at gmail.com (tmhikaru at gmail.com) Date: Tue, 28 Feb 2017 22:22:14 -0800 Subject: [Borgbackup] deduplication understanding and best practice? In-Reply-To: References: <8c45d93f-a449-e306-520d-3efa8ef03836@osupytheas.fr> <86897e29-a32d-459d-a63b-c810b6298332@waldmann-edv.de> Message-ID: <20170301062214.GB3767@raspberrypi.home> On Tue, Feb 28, 2017 at 02:29:27PM +0100, Thomas Waldmann wrote: > >>> i) make a "all in one" linux borg-repository in which I backup > >>> several PC > >>> in the aim of benefit of identical blocs deduplication of many PC > >> > >> Do this if you have time, but you want to save space (esp. if you have > >> considerable duplication amongst the machines). > > > > that's not easy to know before > > but I understand > > a single repository can save space (If we have considerable duplication > > ) , but not time > > It's not just "not saving time", it will need quite some additional time > for chunks cache resync. > Waldmann really isn't kidding here, regenerating the chunks cache can take quite a lot of time, even on a powerful machine. Be aware that using a borg client on a weak cpu with a very large repository on the server will rely entirely on the *client machine* to do all of the work - resyncing cache, chunking and compressing the data that needs to be written, you name it. I tried this, it just doesn't work well. If you run into problems where the client takes a long time to resync one of your computers, or even worse gets routinely stuck trying to, it'd probably be a good idea to make that computer have its own repository. Tim From public at enkore.de Thu Mar 2 05:59:58 2017 From: public at enkore.de (Marian Beermann) Date: Thu, 2 Mar 2017 11:59:58 +0100 Subject: [Borgbackup] Performance of using Borg on SMR drives Message-ID: Did someone try this yet? >From reviews like http://www.tomsitpro.com/articles/seagate-8tb-archive-hdd-review,2-822-3.html I take it that the banded nature of a device-managed SMR drive means that low-concurrency sequential writes and no overwrites are best for these drives -- just like Borg does. However, I think the 5 MiB segment size (+fsync) used in 1.0.x might be very bad for performance here (since the bands are larger than 5 MiB). 500 MiB (1.1.x) works maybe better there. Cheers, Marian From luc at spaceroots.org Thu Mar 2 16:13:49 2017 From: luc at spaceroots.org (Luc Maisonobe) Date: Thu, 2 Mar 2017 22:13:49 +0100 Subject: [Borgbackup] Issues starting a new repository by cloning an existing one Message-ID: <41dc7819-88aa-3c11-2604-d5ca5a44c50f@spaceroots.org> Hi all, I encounter some issues with a probably not frequent use case. My ultimate goal is to have two backup servers at distant locations, in order to mitigate site disaster (say a fire in my house where one server is, or simply a theft). I started creating the repository on a small server locally, on the same local network as the clients, so there are no bandwidth problems. This works well. Then I attempted the second phase of my setup, with a remote server hosted 60 kilometers away, in a place where I cannot go very easily (it is far, I have to plan the visit with the guy responsible of the room ...). 
As I was learning to use borg and experimenting at the same time, it was not practical to go and copy data every day. So I decided to simply do everything remotely, and mainly experiment with borg locally and rsync the local repositories at my (slow) internet connection rate. From the beginning, this was intended only as a temporary setup, as borg FAQ recommends to use borg create directly with each server rather than copying the repositories. Also as I noticed that when a repository is copied and then used from the new location, it keeps emitting warnings about location change, so copy would indeed be inconvenient on the long run. As the first backup involved almost no deduplication, it was huge. The rsync was therefore split in several days. In fact, given the reduced bandwidth of my internet access, it took between one and two weeks to complete. Then, I had my two servers with identical data. I could therefore stop doing the rsync and start using borg twice, as per FAQ recommendation. However, I kept getting errors about "Cache is newer than repository - this is either an attack or unsafe (multiple repos with same ID)". This seemed odd to me as the repositories are really identical at this step and all machines are synchornised to UTC using NTP. I did a last backup at home and a last rsync to be sure, but still got the problem. It was not a simple warning, it really aborted the backup. I finally succeeded, after several attempts, so now everything is fine. However, I wonder if this use case is really supported: starting a repository by first cloning an existing one and later on having both repositories live their own independent lifes. What is this repo ID? Is there a way to change it if one considers it is bad practice to have different repositories with the same ID (even when they are on different servers at different locations)? best regards, Luc From public at enkore.de Thu Mar 2 17:06:21 2017 From: public at enkore.de (Marian Beermann) Date: Thu, 2 Mar 2017 23:06:21 +0100 Subject: [Borgbackup] Issues starting a new repository by cloning an existing one In-Reply-To: <41dc7819-88aa-3c11-2604-d5ca5a44c50f@spaceroots.org> References: <41dc7819-88aa-3c11-2604-d5ca5a44c50f@spaceroots.org> Message-ID: <6812b4e7-b72a-d891-afb2-6fe976230fa2@enkore.de> On 02.03.2017 22:13, Luc Maisonobe wrote: > However, I wonder if this use case is really supported: starting a > repository by first cloning an existing one and later on having both > repositories live their own independent lifes. This is unsupported for encrypted repositories at this time. For unencrypted repositories it's ok, but needs manual work to work correctly (cont'd) > What is this repo ID? Globally unique identifier for each repository to match the repository to it's encryption keys and the cache, independent of repository location. It is stored in the "config" file in the repository. (cont'd) > Is there a way to change it if one considers it is bad practice to > have different repositories with the same ID (even when they are on > different servers at different locations)? Edit the "config" file and change the ID there in one of the repositories. Cheers, Marian From public at enkore.de Thu Mar 2 17:07:32 2017 From: public at enkore.de (Marian Beermann) Date: Thu, 2 Mar 2017 23:07:32 +0100 Subject: [Borgbackup] Performance of using Borg on SMR drives In-Reply-To: References: Message-ID: <5ff322b2-18d7-cc8c-381b-da8f9e05fc69@enkore.de> On 02.03.2017 11:59, Marian Beermann wrote: > Did someone try this yet? 
> > From reviews like > http://www.tomsitpro.com/articles/seagate-8tb-archive-hdd-review,2-822-3.html > I take it that the banded nature of a device-managed SMR drive means > that low-concurrency sequential writes and no overwrites are best for > these drives -- just like Borg does. > > However, I think the 5 MiB segment size (+fsync) used in 1.0.x might be > very bad for performance here (since the bands are larger than 5 MiB). > 500 MiB (1.1.x) works maybe better there. > > Cheers, Marian > ^- I ordered one now to try it, the Samsung Archive one which is device-managed SMR. We'll see. From gmatht at gmail.com Fri Mar 3 09:39:52 2017 From: gmatht at gmail.com (John McCabe-Dansted) Date: Fri, 3 Mar 2017 22:39:52 +0800 Subject: [Borgbackup] Performance of using Borg on SMR drives In-Reply-To: <5ff322b2-18d7-cc8c-381b-da8f9e05fc69@enkore.de> References: <5ff322b2-18d7-cc8c-381b-da8f9e05fc69@enkore.de> Message-ID: I haven't got borgbackup to work well on SMR drives. I get hardware errors and a stale lock file 100GB or so, but I have only tried 1.0.x. Maybe I should try again with 1.1.x On 3 March 2017 at 06:07, Marian Beermann wrote: > On 02.03.2017 11:59, Marian Beermann wrote: > > Did someone try this yet? > > > > From reviews like > > http://www.tomsitpro.com/articles/seagate-8tb-archive- > hdd-review,2-822-3.html > > I take it that the banded nature of a device-managed SMR drive means > > that low-concurrency sequential writes and no overwrites are best for > > these drives -- just like Borg does. > > > > However, I think the 5 MiB segment size (+fsync) used in 1.0.x might be > > very bad for performance here (since the bands are larger than 5 MiB). > > 500 MiB (1.1.x) works maybe better there. > > > > Cheers, Marian > > > > ^- I ordered one now to try it, the Samsung Archive one which is > device-managed SMR. > > We'll see. > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -- John C. McCabe-Dansted -------------- next part -------------- An HTML attachment was scrubbed... URL: From public at enkore.de Fri Mar 3 09:44:16 2017 From: public at enkore.de (Marian Beermann) Date: Fri, 3 Mar 2017 15:44:16 +0100 Subject: [Borgbackup] Performance of using Borg on SMR drives In-Reply-To: References: <5ff322b2-18d7-cc8c-381b-da8f9e05fc69@enkore.de> Message-ID: <3038911a-7f75-c0a4-7abb-c7a7cbc2a216@enkore.de> I saw a long thread in the Linux Kernel Bugzilla, some versions of the kernel apparently had a lot of problems with device-managed SMR (IO errors, timeouts, unresponsive devices when writing more than a couple GB), but it's supposed to be all fixed now. (If I understand correctly some people still see these issues with some workloads, but "in general" it seems to work -- more or less?) https://bugzilla.kernel.org/show_bug.cgi?id=93581 On 03.03.2017 15:39, John McCabe-Dansted wrote: > I haven't got borgbackup to work well on SMR drives. I get hardware > errors and a stale lock file 100GB or so, but I have only tried 1.0.x. > > Maybe I should try again with 1.1.x > > On 3 March 2017 at 06:07, Marian Beermann > wrote: > > On 02.03.2017 11:59, Marian Beermann wrote: > > Did someone try this yet? > > > > From reviews like > > http://www.tomsitpro.com/articles/seagate-8tb-archive-hdd-review,2-822-3.html > > > I take it that the banded nature of a device-managed SMR drive means > > that low-concurrency sequential writes and no overwrites are best for > > these drives -- just like Borg does. 
> > > > However, I think the 5 MiB segment size (+fsync) used in 1.0.x might be > > very bad for performance here (since the bands are larger than 5 MiB). > > 500 MiB (1.1.x) works maybe better there. > > > > Cheers, Marian > > > > ^- I ordered one now to try it, the Samsung Archive one which is > device-managed SMR. > > We'll see. > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > > > > > > -- > John C. McCabe-Dansted From public at enkore.de Sat Mar 4 12:28:52 2017 From: public at enkore.de (Marian Beermann) Date: Sat, 4 Mar 2017 18:28:52 +0100 Subject: [Borgbackup] Performance of using Borg on SMR drives In-Reply-To: References: Message-ID: On 02.03.2017 11:59, Marian Beermann wrote: > Did someone try this yet? > > From reviews like > http://www.tomsitpro.com/articles/seagate-8tb-archive-hdd-review,2-822-3.html > I take it that the banded nature of a device-managed SMR drive means > that low-concurrency sequential writes and no overwrites are best for > these drives -- just like Borg does. > > However, I think the 5 MiB segment size (+fsync) used in 1.0.x might be > very bad for performance here (since the bands are larger than 5 MiB). > 500 MiB (1.1.x) works maybe better there. > > Cheers, Marian > First results: https://github.com/borgbackup/borg/issues/2252 Looking good. Cheers, Marian From billy at okbecause.com Thu Mar 9 18:43:59 2017 From: billy at okbecause.com (Billy Charlton) Date: Thu, 9 Mar 2017 15:43:59 -0800 Subject: [Borgbackup] Enhancement idea: archive tags Message-ID: After spending so much time in the git universe, I find myself wishing I could apply additional tags to specific borg archives. I know borgbackup is primarily a backup solution and not source control. However, I do find borg's deduplication capabilities really useful in ways it may not have been originally intended! I envision archiving multiple versions of very large files -- sort of like git-lfs or git-fat, but with real deduplication. And then fetching archives using tags such as "latest". This could take form as a new command: `borg tag`: Create a new tag and point it to an existing archive: borg tag [tag-name] [repo::archive-name] Delete a tag from a repo: borg tag -d [tagname] [repo] List all tags and the archives they point to borg tag --list [repo] Is this out of scope for this project? If so, is there perhaps another tool that is well-suited to this use case? -------------- next part -------------- An HTML attachment was scrubbed... URL: From adrian.klaver at aklaver.com Fri Mar 10 10:39:28 2017 From: adrian.klaver at aklaver.com (Adrian Klaver) Date: Fri, 10 Mar 2017 07:39:28 -0800 Subject: [Borgbackup] Enhancement idea: archive tags In-Reply-To: References: Message-ID: <6fe82dcf-fbe9-d6f5-8090-7a0d198b7293@aklaver.com> On 03/09/2017 03:43 PM, Billy Charlton wrote: > After spending so much time in the git universe, I find myself wishing I > could apply additional tags to specific borg archives. > > I know borgbackup is primarily a backup solution and not source control. > However, I do find borg's deduplication capabilities really useful in > ways it may not have been originally intended! I envision archiving > multiple versions of very large files -- sort of like git-lfs or > git-fat, but with real deduplication. And then fetching archives using > tags such as "latest". 
> > This could take form as a new command: `borg tag`: > > Create a new tag and point it to an existing archive: > borg tag [tag-name] [repo::archive-name] > > Delete a tag from a repo: > borg tag -d [tagname] [repo] > > List all tags and the archives they point to > borg tag --list [repo] > > Is this out of scope for this project? If so, is there perhaps another > tool that is well-suited to this use case? I don't have an opinion one way or the other. I do think you should probably submit this as a feature request here: https://github.com/borgbackup/borg/issues My guess is you will get more feedback there. -- Adrian Klaver adrian.klaver at aklaver.com From public at enkore.de Fri Mar 10 10:46:51 2017 From: public at enkore.de (Marian Beermann) Date: Fri, 10 Mar 2017 16:46:51 +0100 Subject: [Borgbackup] Enhancement idea: archive tags In-Reply-To: References: Message-ID: <555aaa6b-64fd-6838-03d9-4cbe19924d29@enkore.de> On 10.03.2017 00:43, Billy Charlton wrote: > After spending so much time in the git universe, I find myself wishing I > could apply additional tags to specific borg archives. > > I know borgbackup is primarily a backup solution and not source control. > However, I do find borg's deduplication capabilities really useful in > ways it may not have been originally intended! I envision archiving > multiple versions of very large files -- sort of like git-lfs or > git-fat, but with real deduplication. And then fetching archives using > tags such as "latest". > > This could take form as a new command: `borg tag`: > > Create a new tag and point it to an existing archive: > borg tag [tag-name] [repo::archive-name] > > Delete a tag from a repo: > borg tag -d [tagname] [repo] > > List all tags and the archives they point to > borg tag --list [repo] > > Is this out of scope for this project? If so, is there perhaps another > tool that is well-suited to this use case? Some previous discussion: https://github.com/borgbackup/borg/issues/846 Cheers, Marian From public at enkore.de Fri Mar 10 10:49:14 2017 From: public at enkore.de (Marian Beermann) Date: Fri, 10 Mar 2017 16:49:14 +0100 Subject: [Borgbackup] Enhancement idea: archive tags In-Reply-To: References: Message-ID: P.S. Since I began using Borg for archival purposes -- which it is really good at, the tag-line "deduplicating archiver" is quite right -- I've put some work into allowing borg-mount to expose archives in a filesystem-like hierarchy, because larger archives (in the abstract sense, not the borg sense) in a flat namespace become really messy to browse. See https://github.com/borgbackup/borg/issues/2263 Cheers, Marian From billy at worldofbilly.com Fri Mar 10 11:06:35 2017 From: billy at worldofbilly.com (Billy Charlton) Date: Fri, 10 Mar 2017 08:06:35 -0800 Subject: [Borgbackup] Enhancement idea: archive tags In-Reply-To: References: Message-ID: Adrian, thanks for the suggestion. I've added to the discussion at https://github.com/borgbackup/borg/issues/846 - Billy On Fri, Mar 10, 2017 at 7:49 AM, Marian Beermann wrote: > P.S. > > Since I began using Borg for archival purposes -- which it is really > good at, the tag-line "deduplicating archiver" is quite right -- I've > put some work into allowing borg-mount to > expose archives in a filesystem-like hierarchy, because larger archives > (in the abstract sense, not the borg sense) in a flat namespace become > really messy to browse. 
> > See https://github.com/borgbackup/borg/issues/2263 > > Cheers, Marian > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maurice.libes at osupytheas.fr Mon Mar 13 06:02:32 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Mon, 13 Mar 2017 11:02:32 +0100 Subject: [Borgbackup] borg web interface and backup checking Message-ID: <2cbfb98a-4670-e972-5fcc-a943b8dc2b97@osupytheas.fr> hi to all I have some difficulties in understanding how the borg web interface works I hesitate to test it because I read that the web interface is to put on the borg client side? (https://pypi.python.org/pypi/borgweb) or I am wrong? If it is the case , it means that we can not have a synoptic web vision of the whole backuped PCs (like backuppc does) ? let's be clear I am impressed by the performance of "borg" in term of backup and as I am deploying it on many pc of my labs, I was asking to me how to check the logs of the daily backup of many pc at a glance thanks for information ML -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 From tw at waldmann-edv.de Mon Mar 13 09:14:45 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 13 Mar 2017 14:14:45 +0100 Subject: [Borgbackup] borg web interface and backup checking In-Reply-To: <2cbfb98a-4670-e972-5fcc-a943b8dc2b97@osupytheas.fr> References: <2cbfb98a-4670-e972-5fcc-a943b8dc2b97@osupytheas.fr> Message-ID: > I have some difficulties in understanding how the borg web interface works > I hesitate to test it because I read that the web interface is to put > on the borg client side? > (https://pypi.python.org/pypi/borgweb) borgweb is a web-based log viewer and "start button" for the borg client. a typical use case would be a small company having a server that needs to get backed up in a rather ad-hoc (not: fixed time) fashion and where some non-administrator checks the logs. then you could have borg and borgweb on that server, target of backups could be a usb disk. backups would be triggered by clicking on that start button, logs could be viewed via the web interface. > If it is the case , it means that we can not have a synoptic web vision > of the whole backuped PCs (like backuppc does) ? there is no host selection yet in borgweb. > to check the logs of the daily backup of many pc at a glance If there are many hosts, maybe manually checking is not the way to go anyway. Maybe rather check the exit code(s) and send an ok mail if all 0 and a warning / error mail if 1 or 2. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From public at enkore.de Mon Mar 13 12:13:37 2017 From: public at enkore.de (Marian Beermann) Date: Mon, 13 Mar 2017 17:13:37 +0100 Subject: [Borgbackup] borg web interface and backup checking In-Reply-To: <2cbfb98a-4670-e972-5fcc-a943b8dc2b97@osupytheas.fr> References: <2cbfb98a-4670-e972-5fcc-a943b8dc2b97@osupytheas.fr> Message-ID: On 13.03.2017 11:02, Maurice Libes wrote: > hi to all > > I have some difficulties in understanding how the borg web interface works > I hesitate to test it because I read that the web interface is to put > on the borg client side? > (https://pypi.python.org/pypi/borgweb) > > or I am wrong? 
> > If it is the case , it means that we can not have a synoptic web vision > of the whole backuped PCs (like backuppc does) ? > > > let's be clear I am impressed by the performance of "borg" in term of > backup > and as I am deploying it on many pc of my labs, I was asking to me how > to check the logs > of the daily backup of many pc at a glance > > thanks for information > > ML > It sounds like you would be looking for BorgCube ( https://github.com/enkore/borgcube ), which is "[A network] Backup system built on Borg Backup". Analogy: BorgCube is to Borg like GitHub/GitLab is to Git. NOTE: BorgCube is in NO WAY ready for production use and won't be for some time to come, since I am mostly diverting my resources towards Borg 1.1.x at this time. There are about ~two weeks of unpublished work on BorgCube, though. Cheers, Marian P.S. More broadly speaking, Borg 1.0.x was not well suited to be integrated into some sort of "frontend" to a full-fledged backup application. We put a lot of work into Borg 1.1.x (beta) to make this easier for application developers, so there's some hope that Borg is integrated into more (graphical) backup applications. The _preliminary_ beta docs for that are at http://borgbackup.readthedocs.io/en/latest/internals/frontends.html and the meta-issue is https://github.com/borgbackup/borg/issues/654 From maurice.libes at osupytheas.fr Mon Mar 13 12:28:39 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Mon, 13 Mar 2017 17:28:39 +0100 Subject: [Borgbackup] borg web interface and backup checking In-Reply-To: References: <2cbfb98a-4670-e972-5fcc-a943b8dc2b97@osupytheas.fr> Message-ID: <8d48d034-9680-54df-099f-5b83960da7d1@osupytheas.fr> Le 13/03/2017 ? 14:14, Thomas Waldmann a ?crit : >> I have some difficulties in understanding how the borg web interface works >> I hesitate to test it because I read that the web interface is to put >> on the borg client side? >> (https://pypi.python.org/pypi/borgweb) > borgweb is a web-based log viewer and "start button" for the borg client. > > a typical use case would be a small company having a server that needs > to get backed up in a rather ad-hoc (not: fixed time) fashion and where > some non-administrator checks the logs. ah ok it's not my case :-) I am in an opposite case, where a sysadmin (me) have installed and launched the borg backup of "n" clients PC with a cron script ..; It works well but now everyday I have to check the logs to see if the backup had succeeded well on the client PC , and make some "borg list" and "borg info" commands to check if the backup have succeeded So now I have "n" repository for "n" PC and I had a dream in which a web interface was displaying in green the succeeded backup with the "borg info" values and in red if there was some problems > then you could have borg and borgweb on that server, target of backups > could be a usb disk. backups would be triggered by clicking on that > start button, logs could be viewed via the web interface. > >> If it is the case , it means that we can not have a synoptic web vision >> of the whole backuped PCs (like backuppc does) ? > there is no host selection yet in borgweb. > >> to check the logs of the daily backup of many pc at a glance > If there are many hosts, maybe manually checking is not the way to go > anyway. Maybe rather check the exit code(s) and send an ok mail if all 0 > and a warning / error mail if 1 or 2. 
> yes I will have to write some script to gather the informations and send them to me by mail nobody on the list have played with such a script? ML -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 From public at enkore.de Mon Mar 13 12:33:18 2017 From: public at enkore.de (Marian Beermann) Date: Mon, 13 Mar 2017 17:33:18 +0100 Subject: [Borgbackup] borg web interface and backup checking In-Reply-To: <8d48d034-9680-54df-099f-5b83960da7d1@osupytheas.fr> References: <2cbfb98a-4670-e972-5fcc-a943b8dc2b97@osupytheas.fr> <8d48d034-9680-54df-099f-5b83960da7d1@osupytheas.fr> Message-ID: <34380bcc-e8ee-0d84-e12f-5976e666a953@enkore.de> Automation software might help with this (e.g. Jenkins, Salt Ansible and so on) Cheers, Marian On 13.03.2017 17:28, Maurice Libes wrote: > > > Le 13/03/2017 ? 14:14, Thomas Waldmann a ?crit : >>> I have some difficulties in understanding how the borg web interface >>> works >>> I hesitate to test it because I read that the web interface is to put >>> on the borg client side? >>> (https://pypi.python.org/pypi/borgweb) >> borgweb is a web-based log viewer and "start button" for the borg client. >> >> a typical use case would be a small company having a server that needs >> to get backed up in a rather ad-hoc (not: fixed time) fashion and where >> some non-administrator checks the logs. > ah ok it's not my case :-) > > I am in an opposite case, where a sysadmin (me) have installed and > launched the borg backup of "n" clients PC with a cron script ..; It > works well > but now everyday I have to check the logs to see if the backup had > succeeded well on the client PC , and make some "borg list" and "borg > info" commands to check if the backup have succeeded > > So now I have "n" repository for "n" PC and I had a dream in which a web > interface > was displaying in green the succeeded backup with the "borg info" values > and in red if there was some problems > >> then you could have borg and borgweb on that server, target of backups >> could be a usb disk. backups would be triggered by clicking on that >> start button, logs could be viewed via the web interface. >> >>> If it is the case , it means that we can not have a synoptic web vision >>> of the whole backuped PCs (like backuppc does) ? >> there is no host selection yet in borgweb. >> >>> to check the logs of the daily backup of many pc at a glance >> If there are many hosts, maybe manually checking is not the way to go >> anyway. Maybe rather check the exit code(s) and send an ok mail if all 0 >> and a warning / error mail if 1 or 2. >> > yes I will have to write some script to gather the informations and send > them to me by mail > > nobody on the list have played with such a script? > > ML > From maurice.libes at osupytheas.fr Mon Mar 13 12:37:38 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Mon, 13 Mar 2017 17:37:38 +0100 Subject: [Borgbackup] borg web interface and backup checking In-Reply-To: References: <2cbfb98a-4670-e972-5fcc-a943b8dc2b97@osupytheas.fr> Message-ID: <061f407a-44c9-f202-c700-cd809fda92a3@osupytheas.fr> Le 13/03/2017 ? 17:13, Marian Beermann a ?crit : > On 13.03.2017 11:02, Maurice Libes wrote: >> hi to all >> >> I have some difficulties in understanding how the borg web interface works >> I hesitate to test it because I read that the web interface is to put >> on the borg client side? >> (https://pypi.python.org/pypi/borgweb) >> >> or I am wrong? 
>> >> If it is the case , it means that we can not have a synoptic web vision >> of the whole backuped PCs (like backuppc does) ? >> >> >> let's be clear I am impressed by the performance of "borg" in term of >> backup >> and as I am deploying it on many pc of my labs, I was asking to me how >> to check the logs >> of the daily backup of many pc at a glance >> >> thanks for information >> >> ML >> > It sounds like you would be looking for BorgCube ( > https://github.com/enkore/borgcube ), which is "[A network] Backup > system built on Borg Backup". Analogy: BorgCube is to Borg like > GitHub/GitLab is to Git. > > NOTE: BorgCube is in NO WAY ready for production use and won't be for > some time to come, since I am mostly diverting my resources towards Borg > 1.1.x at this time. There are about ~two weeks of unpublished work on > BorgCube, though. > > Cheers, Marian > > P.S. > > More broadly speaking, Borg 1.0.x was not well suited to be integrated > into some sort of "frontend" to a full-fledged backup application. > > We put a lot of work into Borg 1.1.x (beta) to make this easier for > application developers, so there's some hope that Borg is integrated > into more (graphical) backup applications. ok understood take your time, but it will surely be a important direction to add some functionalities to your wonderful product waiting for this piece of software, a good way could be to write some bash or perl script to gather and send the backup information by mail to a sysadmin and make the life easiest may be a working group could try to write such a pieces of perl script? me If I have time ;-) ML > > The _preliminary_ beta docs for that are at > http://borgbackup.readthedocs.io/en/latest/internals/frontends.html and > the meta-issue is https://github.com/borgbackup/borg/issues/654 > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 From felix.schwarz at oss.schwarz.eu Mon Mar 13 12:47:09 2017 From: felix.schwarz at oss.schwarz.eu (Felix Schwarz) Date: Mon, 13 Mar 2017 17:47:09 +0100 Subject: [Borgbackup] borg web interface and backup checking In-Reply-To: <8d48d034-9680-54df-099f-5b83960da7d1@osupytheas.fr> References: <2cbfb98a-4670-e972-5fcc-a943b8dc2b97@osupytheas.fr> <8d48d034-9680-54df-099f-5b83960da7d1@osupytheas.fr> Message-ID: <1a5716f2-d8be-dc75-d01a-0caf5d592371@oss.schwarz.eu> Am 13.03.2017 um 17:28 schrieb Maurice Libes: > So now I have "n" repository for "n" PC and I had a dream in which a web > interface > was displaying in green the succeeded backup with the "borg info" values > and in red if there was some problems (...) > yes I will have to write some script to gather the informations and send > them to me by mail To me that sounds an aweful lot like the regular monitoring of a "service" which each sysadmin should do. As a first step you can ensure that the period borg backup does not output anything if successful when run from cron. That way each borg error will cause an error email which you will notice. Of course this relies on having a working email delivery (otherwise your "notification" is broken). A second step is a monitoring agent (e.g. integrated in nagios, icinga, ...) to query the date of the latest borg backup. Monitoring software usually has a "dashboard" with red and green lights. 
However relying on the "dashboard" to notify errors is imho not avisable (pull vs. push notifications) because it is overlooked far too easily. Felix From maurice.libes at osupytheas.fr Mon Mar 13 13:56:50 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Mon, 13 Mar 2017 18:56:50 +0100 Subject: [Borgbackup] borg web interface and backup checking In-Reply-To: <1a5716f2-d8be-dc75-d01a-0caf5d592371@oss.schwarz.eu> References: <2cbfb98a-4670-e972-5fcc-a943b8dc2b97@osupytheas.fr> <8d48d034-9680-54df-099f-5b83960da7d1@osupytheas.fr> <1a5716f2-d8be-dc75-d01a-0caf5d592371@oss.schwarz.eu> Message-ID: <9433a179-8b6c-eeb8-1240-d0f57122f49a@osupytheas.fr> Le 13/03/2017 ? 17:47, Felix Schwarz a ?crit : > Am 13.03.2017 um 17:28 schrieb Maurice Libes: >> So now I have "n" repository for "n" PC and I had a dream in which a web >> interface >> was displaying in green the succeeded backup with the "borg info" values >> and in red if there was some problems > (...) >> yes I will have to write some script to gather the informations and send >> them to me by mail > To me that sounds an aweful lot like the regular monitoring of a "service" > which each sysadmin should do. > As a first step you can ensure that the period borg backup does not output > anything if successful when run from cron. That way each borg error will cause > an error email which you will notice. > > Of course this relies on having a working email delivery (otherwise your > "notification" is broken). > > A second step is a monitoring agent (e.g. integrated in nagios, icinga, ...) > to query the date of the latest borg backup. > > Monitoring software usually has a "dashboard" with red and green lights. > > However relying on the "dashboard" to notify errors is imho not avisable (pull > vs. push notifications) because it is overlooked far too easily. yes you're perfectly right, such monitoring tools (icinga, zabbix etc) can do the job but in a first approach I was thinking of a dashboard integrated to Borg server side, like the one of backuppc (may be the better functionality of backuppc) in which you can check the backup of differents host on a single web page with age old geeks love web interfaces ;-) but I realize that all the information comes from the client side and such a dashboard is surely hard to do on the server side ? (don't know what is kept on server side) for the moment I just added a trap command which send me a mail in case of [ $? -ne 0 ] trap '[ "$?" -eq 0 ] || send_tosysadmin ' EXIT send_tosysadmin() { message="[Borg] $hostname : probleme sur sauvegarde Borg du PC $ipadr"; to_logfile $message echo $message | mutt -s "$message" $tosysadmin } ML > Felix > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 From jon77h at gmail.com Thu Mar 16 08:45:13 2017 From: jon77h at gmail.com (jon h) Date: Thu, 16 Mar 2017 22:45:13 +1000 Subject: [Borgbackup] Cannot find borg log file Message-ID: Hi I've made my first borg backup, after studying the documentation for a few hours. [https://borgbackup.readthedocs.io/en/stable/...] But now I can't find the log file. It's not in /var/log, nor in /root anywhere. More searching the docs and googling reveal nothing about where the log file should be. 
Here are the commands I used: [jh at SSD1 data]$ export TMPDIR='/mnt/zp0/ztemp' [jh at SSD1 data]$ export BORG_REPO='/mnt/zp0/borg/repo' [jh at SSD1 data]$ export BORG_CACHE_DIR='/mnt/zp0/borg/cache' [jh at SSD1 data]$ export BORG_FILES_CACHE_TTL=100 [jh at SSD1 data]$ borg init --encryption=none $BORG_REPO [root at SSD1 ~]# export TMPDIR='/mnt/zp0/ztemp' [root at SSD1 ~]# export BORG_REPO='/mnt/zp0/borg/repo' [root at SSD1 ~]# export BORG_CACHE_DIR='/mnt/zp0/borg/cache' [root at SSD1 ~]# export BORG_FILES_CACHE_TTL=100 [root at SSD1 ~]# export EXCLUDE_FILE='/mnt/data/---/exclude.txt' [root at SSD1 ~]# borg create /mnt/zp0/borg/repo::manjaro_all_170316T2010 / --numeric-owner --one-file-system --stats --progress --show-rc --exclude-from $EXCLUDE_FILE --chunker-params=17,23,20,4095 There was a progress line while it was running, but no stats when it finished. I thought the lack of a log file might be caused by lack of errors, so i tried 'check' with 'info' level : [root at SSD1 ~]# borg check --info --show-rc Starting repository check Completed repository check, no problems found. Starting archive consistency check... Analyzing archive manjaro_all_170316T2010 (1/1) Archive consistency check complete, no problems found. terminating with success status, rc 0 At least there's some output this time, including rc, but still no log file in any place that's obvious to me. It's surprising that the docs don't mention the log location. Feature request: allow us to set BORG_LOG_DIR in the same way that we can set BORG_CACHE_DIR. Any help will be appreciated. -- jon From public at enkore.de Thu Mar 16 08:50:48 2017 From: public at enkore.de (Marian Beermann) Date: Thu, 16 Mar 2017 13:50:48 +0100 Subject: [Borgbackup] Cannot find borg log file In-Reply-To: References: Message-ID: <78fc8618-6a96-de0f-cf28-a846a3645dbc@enkore.de> Hi Jon, Borg doesn't write a log file on it's own accord. I think you may just be missing a -v/--verbose here to see all output you need. Cheers, Marian From dsjstc at gmail.com Sat Mar 18 22:29:57 2017 From: dsjstc at gmail.com (DS Jstc) Date: Sat, 18 Mar 2017 19:29:57 -0700 Subject: [Borgbackup] Borg on Optware Message-ID: <734ae98f-ac12-7678-b1a1-588d4536ec5f@gmail.com> Has anyone here had success building Borg for an OpenWRT router, or other Optware platform? To be clear here, I'm not trying to back up borg archives *to* the router, that's trivially done with rsync. I actually want the router to run `borg mount` locally -- so the router can mount and export my media file archive, for times when my workstation isn't running. From borgbackup at equaeghe.nospammail.net Mon Mar 20 17:53:20 2017 From: borgbackup at equaeghe.nospammail.net (Erik Quaeghebeur) Date: Mon, 20 Mar 2017 22:53:20 +0100 Subject: [Borgbackup] Delete files from archive Message-ID: <1490046800.3241598.917729504.02B7EE5C@webmail.messagingengine.com> Dear List, is it possible to delete files from an archive? Or, more specifically, what is the best approach to deal with a forgotten --exclude? Best, Erik From tw at waldmann-edv.de Mon Mar 20 18:16:15 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Mon, 20 Mar 2017 23:16:15 +0100 Subject: [Borgbackup] Delete files from archive In-Reply-To: <1490046800.3241598.917729504.02B7EE5C@webmail.messagingengine.com> References: <1490046800.3241598.917729504.02B7EE5C@webmail.messagingengine.com> Message-ID: <84d5e404-a9ad-007d-88b2-d14156e50c7d@waldmann-edv.de> > is it possible to delete files from an archive? 
Or, more specifically, > what is the best approach to deal with a forgotten --exclude? When doing the first backup(s) interactively, I usually use --list so I see what's getting backed up. Often I already notice some missing excludes. I may also redirect the output to a file, so I can go over it afterwards and extract information from it to tune my excludes. Depending on how long it would take to backup the stuff I would rather like excluded, I can either decide to continue the backup run or cancel it via Ctrl-C. I do NOT delete the cancelled / unwanted archive(s) directly afterwards, but just fix my excludes and then run the backup again. It will work faster because a lot of chunks are already in the repo from the first archive (either via the completed archive or via a .checkpoint archive). I repeat this until I am happy with the excludes, then I remove all the incomplete archives (or let prune do it). That doesn't help if you already have a lot of archives and want to remove stuff later from them. borg 1.1 will have recreate for this. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From maurice.libes at osupytheas.fr Tue Mar 21 06:33:30 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Tue, 21 Mar 2017 11:33:30 +0100 Subject: [Borgbackup] ip vs name of the borg server? Message-ID: hi to all I don't know how to solve this little problem below I don't know when and why it seems that I have initialized a borg repository with a numeric address IP of the borg server, and when I launch a backup with the IP name of the same server, I have this warning below and I must answer y/n how to solve this confusion? It sounds like a trivial annoyment but i have'nt found the solution thanks ML 2017-03-21 00:03:01 %%%%%%%%%% Starting new backup 2017-03-21....%%%%%%%%%% 2017-03-21 00:03:01 Pushing archive borg at _/*my.num.adr.ip*/_:/mnt/provigo-borg/sauve-pcbaklouti::baklouti-homes-2017-03-21 Warning: The repository at location ssh://borg at _/*my.num.adr.ip*/_/mnt/provigo-borg/sauve-pcbaklouti was previously located at ssh://borg@/*myborg.serveur.univ.fr*//mnt/provigo-borg/sauve-pcbaklouti Do you want to continue? [yN] Aborting. -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 -------------- next part -------------- An HTML attachment was scrubbed... URL: From luc at spaceroots.org Tue Mar 21 07:34:21 2017 From: luc at spaceroots.org (Luc Maisonobe) Date: Tue, 21 Mar 2017 12:34:21 +0100 Subject: [Borgbackup] ip vs name of the borg server? In-Reply-To: References: Message-ID: <5d35da95-8bdc-89ba-04b1-f3c6eb6cf14f@spaceroots.org> Le 21/03/2017 ? 11:33, Maurice Libes a ?crit : > hi to all > > I don't know how to solve this little problem below > > I don't know when and why it seems that I have initialized a borg > repository with a numeric address IP of the borg server, > > and when I launch a backup with the IP name of the same server, I have > this warning below and I must answer y/n > > how to solve this confusion? It sounds like a trivial annoyment but i > have'nt found the solution Hi Maurice, I think you should edit the file ~/.cache.borg//config on the client host. The previous location appears here. The hexadecimal number is the id of the repository, it appears both as the directory name and as an entry in the config file. As far as I know, it is only a warning, and if you answer yes, this file will be automatically edited for you. 
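For illustration, a minimal sketch of both routes (the repository URL below is only the placeholder from your mail, and BORG_RELOCATED_REPO_ACCESS_IS_OK is assumed to be supported by your borg version; check the environment variables section of the docs):

    # see what the client-side cache has recorded for each repository id
    grep -H previous_location ~/.cache/borg/*/config

    # either answer the prompt once with "y" (borg then updates previous_location),
    # or accept the relocation non-interactively, e.g. from a cron job:
    export BORG_RELOCATED_REPO_ACCESS_IS_OK=yes
    borg create borg@my.num.adr.ip:/mnt/provigo-borg/sauve-pcbaklouti::baklouti-homes-{now:%Y-%m-%d} /home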
best regards, Luc > > thanks > > ML > > > > 2017-03-21 00:03:01 %%%%%%%%%% Starting new backup 2017-03-21....%%%%%%%%%% > 2017-03-21 00:03:01 Pushing archive > borg at _/*my.num.adr.ip*/_:/mnt/provigo-borg/sauve-pcbaklouti::baklouti-homes-2017-03-21 > Warning: The repository at location > ssh://borg at _/*my.num.adr.ip*/_/mnt/provigo-borg/sauve-pcbaklouti was > previously located at > ssh://borg@/*myborg.serveur.univ.fr*//mnt/provigo-borg/sauve-pcbaklouti > Do you want to continue? [yN] Aborting. > > > -- > M. LIBES > Service Informatique OSU Pytheas - UMS 3470 CNRS > Batiment Oceanomed > Campus de Luminy > 13288 Marseille cedex 9 > Tel: 04860 90529 > > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From gmatht at gmail.com Wed Mar 22 11:56:34 2017 From: gmatht at gmail.com (John McCabe-Dansted) Date: Wed, 22 Mar 2017 23:56:34 +0800 Subject: [Borgbackup] What don't you like about Borg? In-Reply-To: References: <87insnrre5.fsf@angela.anarc.at> <25ad8422-919a-f5f4-bf42-1f012f5a1c35@enkore.de> <26bb7618-6e2e-16ab-50aa-22fcae66ede7@enkore.de> Message-ID: On 28 October 2016 at 16:30, John McCabe-Dansted wrote: > 3. No Partclone integration for storing sparse disk images. > BTW, I have made a trival prototype utility `unrescue` so that I can pipe zero filled (not sparse) disk images directly into borg. So for example, if you want to store the allocated blocks in the raw disk image /tmp/foo.img you could do the following: wget https://raw.githubusercontent.com/gmatht/joshell/master/c/unrescue.c gcc unrescue.c -o unrescue borg init /tmp/unrescue_test sudo partclone.ext4 -D -s /tmp/foo.img -L /tmp/junk -o - | grep ^0x.*0x | ./unrescue 3< /tmp/foo.img | borg create -s -C lz4 /tmp/unrescue_test::foo - This isn't as fast as it could be, borg can only compress the zeros at 100MBs on my i7-2620M. Skipping over the zeros like usual partclone is clearly much faster. When borg supports querying sparse files using SEEK_DATA and SEEK_HOLE (#14) this could perhaps be fixed by presenting borg with a FUSE file system using the recently added SEEK_DATA support in fuse. In addition to space savings over conventional gzipped partclone files this is also just a normal image file, so you aren't limited by the partclone format. For example, mounting the backed up partition: sudo borg mount /tmp/unrescue_test /mnt/tmp sudo mount /mnt/tmp/foo/stdin /mnt/tmp2 Or resizing the backed up partition in place: git clone https://github.com/vi/fusecow.git cd fusecow; make touch /tmp/foo.mountpoint_file sudo ./fusecow /mnt/tmp/foo/stdin /tmp/foo.mountpoint_file /tmp/foo.write_file sudo e2fsck -f /tmp/foo.mountpoint_file sudo resize2fs /tmp/foo.mountpoint_file 100M dd if=/tmp/foo.mountpoint_file of=/dev/smaller_dev1 However, random access seems to be a bit slow. For example, I have found that doing an `ls` on a NTFS image stored in borg pegs a Core2Duo CPU at 100% for about 0.5s per file in the directory. -- John C. McCabe-Dansted -------------- next part -------------- An HTML attachment was scrubbed... URL: From tr.ml at gmx.de Thu Mar 23 12:15:58 2017 From: tr.ml at gmx.de (Rainer Traut) Date: Thu, 23 Mar 2017 17:15:58 +0100 Subject: [Borgbackup] logging problem on RHEL6 Message-ID: <777d946d-ee96-2582-685c-5ce11be9e8a9@gmx.de> Hi, running borg 1.0.10 on RHEL6.9 - it seems to work w/o problem although glibc is older... 
(glibc-2.12-1.209.el6.x86_64) However logging seems to be a problem when creating backups: --list does nothing: # borg create --list --compression lzma ::{now:%y-%m-%d-%H%M%S}-xyz3 /srv/backup/xyz3 # --verbose does nothing: # borg create --verbose --compression lzma ::{now:%y-%m-%d-%H%M%S}-xyz3 /srv/backup/xyz3 # --progress lists the files in one line, then eats this line: # borg create --progress --compression lzma ::{now:%y-%m-%d-%H%M%S}-xyz3 /srv/backup/xyz3 # --stats does nothing: # borg create --stats --compression lzma ::{now:%y-%m-%d-%H%M%S}-xyz3 /srv/backup/netplan3 # All these archives are created fine. Combining these options gives the expextec output, but... I'd like to see only errors and the summary of a backup run. Is this a special RHEL6 problem? Thx Rainer From tw at waldmann-edv.de Thu Mar 23 16:28:01 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 23 Mar 2017 21:28:01 +0100 Subject: [Borgbackup] logging problem on RHEL6 In-Reply-To: <777d946d-ee96-2582-685c-5ce11be9e8a9@gmx.de> References: <777d946d-ee96-2582-685c-5ce11be9e8a9@gmx.de> Message-ID: <08d08d1d-4884-0106-dc0b-bded85d6db5a@waldmann-edv.de> > running borg 1.0.10 on RHEL6.9 - it seems to work w/o problem although > glibc is older... (glibc-2.12-1.209.el6.x86_64) I now and then tested on centos6 and it (mostly) worked ok. There was one strange problem, though: if I generated a borg binary on centos6, it was somehow slower than one from debian7. I could not find out why... About your output issues: --verbose (aka --info) is somehow the "main switch" as it sets the logging level to INFO. If you don't set it to info, you will only see ERROR level output (or stuff that is just print()ed, not logged). --list and --stats enable special kinds of output, you need them additionally to --verbose. > I'd like to see only errors and the summary of a backup run. Guess it is --verbose --stats --show-rc then. > Is this a special RHEL6 problem? No, in borg 1.0 it is like that. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From tw at waldmann-edv.de Mon Mar 27 18:27:56 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 28 Mar 2017 00:27:56 +0200 Subject: [Borgbackup] borgbackup beta 1.1.0b4 released Message-ID: <0bc186ee-112d-32c5-6771-9ee8d7679d86@waldmann-edv.de> https://github.com/borgbackup/borg/releases/tag/1.1.0b4 More details: see URL above. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From maurice.libes at osupytheas.fr Tue Mar 28 04:28:02 2017 From: maurice.libes at osupytheas.fr (Maurice Libes) Date: Tue, 28 Mar 2017 10:28:02 +0200 Subject: [Borgbackup] prune .checkpoint ? Message-ID: <787bb2a5-f164-866b-df12-08da520bef77@osupytheas.fr> hello to all it seems that .checkpoint archives are not concerned by the prune algorithm? have we to delete them by hand ? 
I understood that .checkpoint archives are created upon a problem, and we have to consider them as "normal" (but incomplete) archives in that they contribute to the deduplication your advices please M borg list -v borg at myserver.fr:/mnt/provigo-borg/sauve-bioinfo bioinfo-2017-03-08.checkpoint Wed, 2017-03-08 17:51:09 bioinfo-2017-03-09.checkpoint Thu, 2017-03-09 01:00:23 bioinfo-2017-03-12 Sun, 2017-03-12 01:00:15 bioinfo-2017-03-16.checkpoint Thu, 2017-03-16 01:00:25 bioinfo-2017-03-19 Sun, 2017-03-19 02:20:13 bioinfo-2017-03-22 Wed, 2017-03-22 02:20:42 bioinfo-2017-03-23 Thu, 2017-03-23 02:20:08 bioinfo-2017-03-24 Fri, 2017-03-24 02:20:48 bioinfo-2017-03-25 Sat, 2017-03-25 02:20:08 bioinfo-2017-03-26 Sun, 2017-03-26 03:00:31 bioinfo-2017-03-27 Mon, 2017-03-27 02:20:12 bioinfo-2017-03-28 Tue, 2017-03-28 02:20:35 root at bioinfo:~/BORG# /usr/bin/borg prune -v --list --info --dry-run --keep-daily=7 borg at myserver.fr:/mnt/provigo-borg/sauve-bioinfo --prefix bioinfo Keeping archive: bioinfo-2017-03-28 Tue, 2017-03-28 02:20:35 Keeping archive: bioinfo-2017-03-27 Mon, 2017-03-27 02:20:12 Keeping archive: bioinfo-2017-03-26 Sun, 2017-03-26 03:00:31 Keeping archive: bioinfo-2017-03-25 Sat, 2017-03-25 02:20:08 Keeping archive: bioinfo-2017-03-24 Fri, 2017-03-24 02:20:48 Keeping archive: bioinfo-2017-03-23 Thu, 2017-03-23 02:20:08 Keeping archive: bioinfo-2017-03-22 Wed, 2017-03-22 02:20:42 Would prune: bioinfo-2017-03-19 Sun, 2017-03-19 02:20:13 Would prune: bioinfo-2017-03-12 Sun, 2017-03-12 01:00:15 root at bioinfo:~/BORG# -- M. LIBES Service Informatique OSU Pytheas - UMS 3470 CNRS Batiment Oceanomed Campus de Luminy 13288 Marseille cedex 9 Tel: 04860 90529 From tw at waldmann-edv.de Tue Mar 28 13:02:42 2017 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 28 Mar 2017 19:02:42 +0200 Subject: [Borgbackup] prune .checkpoint ? In-Reply-To: <787bb2a5-f164-866b-df12-08da520bef77@osupytheas.fr> References: <787bb2a5-f164-866b-df12-08da520bef77@osupytheas.fr> Message-ID: <558a76cd-c9d0-ee2b-ec0b-9da87261b906@waldmann-edv.de> > it seems that .checkpoint archives are not concerned by the prune > algorithm? have we to delete them by hand ? I added code to deal with them a while ago (but maybe it was in master branch / will be in 1.1 some day). So, while you see them, delete them manually (or ignore them, they usually don't take up much space due to dedup, so they are a rather cosmetic issue). -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From david_martin at fastmail.com Thu Mar 30 01:57:25 2017 From: david_martin at fastmail.com (David Martin) Date: Thu, 30 Mar 2017 16:57:25 +1100 Subject: [Borgbackup] 'borg create' fails with whitespaces in directory names [OS X] Message-ID: <1490853445.1226750.928243408.67690F3B@webmail.messagingengine.com> Good day, I'm trying a first-time setup of borg and it chokes on whitespaces in my directories. A minimal example: > BORG_PASSPHRASE="$PASS" borg create --dry-run -v --stats \ > $REPOSITORY::'{hostname}-{now:%Y-%m-%d}' \ > 'Users/davidm/Dev/test_repo/test dir' I'm getting the following error: > Users/davidm/Dev/test_repo/test dir: [Errno 2] No such file or directory: 'Users/davidm/Dev/test_repo/test dir' Trying to escape the whitespace does not help. 
For example with 'test\ dir': > Users/davidm/Dev/test_repo/test\ dir: [Errno 2] No such file or directory: 'Users/davidm/Dev/test_repo/test\\ dir' My setup: OSX 10.11.6 borg 1.0.10 installed from homebrew > $ brew cask info borgbackup > borgbackup: 1.0.10 > /opt/homebrew-cask/Caskroom/borgbackup/1.0.10 (5.4MB) > From: https://github.com/caskroom/homebrew-cask/blob/master/Casks/borgbackup.rb > $ python3 --version > Python 3.6.1 > $ bash --version > GNU bash, version 4.4.12(1)-release (x86_64-apple-darwin15.6.0) Am I missing something or not escaping whitespaces correctly? Otherwise I'm fairly familiar with Python and I would be happy to have a closer look. Is there a simple way to run for example 'borg create' from the sources? Thanks and regards, David From tr.ml at gmx.de Thu Mar 30 04:50:45 2017 From: tr.ml at gmx.de (Rainer Traut) Date: Thu, 30 Mar 2017 10:50:45 +0200 Subject: [Borgbackup] logging problem on RHEL6 In-Reply-To: <08d08d1d-4884-0106-dc0b-bded85d6db5a@waldmann-edv.de> References: <777d946d-ee96-2582-685c-5ce11be9e8a9@gmx.de> <08d08d1d-4884-0106-dc0b-bded85d6db5a@waldmann-edv.de> Message-ID: <255de25a-2b50-16b4-96c7-87353263e762@gmx.de> Hi Thomas, Am 23.03.2017 um 21:28 schrieb Thomas Waldmann: >> running borg 1.0.10 on RHEL6.9 - it seems to work w/o problem although >> glibc is older... (glibc-2.12-1.209.el6.x86_64) > > I now and then tested on centos6 and it (mostly) worked ok. > There was one strange problem, though: if I generated a borg binary on > centos6, it was somehow slower than one from debian7. I could not find > out why... Maybe a problem with lzma on RHEL6? lzma: Time (start): Fri, 2017-03-24 00:20:04 Time (end): Fri, 2017-03-24 04:54:44 lz4: Time (start): Thu, 2017-03-30 00:20:04 Time (end): Thu, 2017-03-30 01:45:32 >> I'd like to see only errors and the summary of a backup run. > > Guess it is --verbose --stats --show-rc then. borg check --show-rc does not seem to print the line: "terminating with success status, rc 0" at least in my cron job. ;) Thx Rainer From public at enkore.de Thu Mar 30 04:55:21 2017 From: public at enkore.de (Marian Beermann) Date: Thu, 30 Mar 2017 10:55:21 +0200 Subject: [Borgbackup] logging problem on RHEL6 In-Reply-To: <255de25a-2b50-16b4-96c7-87353263e762@gmx.de> References: <777d946d-ee96-2582-685c-5ce11be9e8a9@gmx.de> <08d08d1d-4884-0106-dc0b-bded85d6db5a@waldmann-edv.de> <255de25a-2b50-16b4-96c7-87353263e762@gmx.de> Message-ID: <628fa98e-95f3-890f-ecd0-87034235e41f@enkore.de> On 30.03.2017 10:50, Rainer Traut wrote: > Hi Thomas, > > Am 23.03.2017 um 21:28 schrieb Thomas Waldmann: >>> running borg 1.0.10 on RHEL6.9 - it seems to work w/o problem although >>> glibc is older... (glibc-2.12-1.209.el6.x86_64) >> >> I now and then tested on centos6 and it (mostly) worked ok. >> There was one strange problem, though: if I generated a borg binary on >> centos6, it was somehow slower than one from debian7. I could not find >> out why... > > Maybe a problem with lzma on RHEL6? > > lzma: > Time (start): Fri, 2017-03-24 00:20:04 > Time (end): Fri, 2017-03-24 04:54:44 > > lz4: > Time (start): Thu, 2017-03-30 00:20:04 > Time (end): Thu, 2017-03-30 01:45:32 > Try comparing the versions of the software used to build it, e.g. Python version, liblzma version. Newer Python releases are often a bit faster than previous ones, and newer liblzma releases may have better performance as well, or use newer ISA extensions. 
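For a rough comparison, one option (only a sketch; the 16 MiB size and preset 6 are arbitrary, and a standalone borg binary bundles its own Python/liblzma, so timing the system python3 only approximates what such a binary does) is to time the lzma module directly on both machines:

    # relative lzma speed of the local Python build; only the comparison between
    # machines is meaningful, not the absolute number
    python3 -c "import lzma, os, time; data = os.urandom(16 * 1024 * 1024); t = time.time(); lzma.compress(data, preset=6); print('%.1f s' % (time.time() - t))"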
From adrian.klaver at aklaver.com  Thu Mar 30 11:16:46 2017
From: adrian.klaver at aklaver.com (Adrian Klaver)
Date: Thu, 30 Mar 2017 08:16:46 -0700
Subject: [Borgbackup] 'borg create' fails with whitespaces in directory names [OS X]
In-Reply-To: <1490853445.1226750.928243408.67690F3B@webmail.messagingengine.com>
References: <1490853445.1226750.928243408.67690F3B@webmail.messagingengine.com>
Message-ID: 

On 03/29/2017 10:57 PM, David Martin wrote:
> Good day,
>
> I'm trying a first-time setup of borg and it chokes on whitespace in my
> directories. A minimal example:
>
>> BORG_PASSPHRASE="$PASS" borg create --dry-run -v --stats \
>>     $REPOSITORY::'{hostname}-{now:%Y-%m-%d}' \
>>     'Users/davidm/Dev/test_repo/test dir'
>
> I'm getting the following error:
>> Users/davidm/Dev/test_repo/test dir: [Errno 2] No such file or directory: 'Users/davidm/Dev/test_repo/test dir'
>
> Trying to escape the whitespace does not help. For example, with 'test\ dir':
>> Users/davidm/Dev/test_repo/test\ dir: [Errno 2] No such file or directory: 'Users/davidm/Dev/test_repo/test\\ dir'

Not sure it is the whitespace:

aklaver at tito:~> borg_new create --dry-run -v --stats borg_test::test_run 'home/aklaver/test space/'
home/aklaver/test space: [Errno 2] No such file or directory: 'home/aklaver/test space'

aklaver at tito:~> borg_new create --dry-run -v --stats borg_test::test_run '/home/aklaver/test space/'

> My setup:
> OSX 10.11.6
> borg 1.0.10 installed from homebrew
>
>> $ brew cask info borgbackup
>> borgbackup: 1.0.10
>> /opt/homebrew-cask/Caskroom/borgbackup/1.0.10 (5.4MB)
>> From: https://github.com/caskroom/homebrew-cask/blob/master/Casks/borgbackup.rb
>
>> $ python3 --version
>> Python 3.6.1
>
>> $ bash --version
>> GNU bash, version 4.4.12(1)-release (x86_64-apple-darwin15.6.0)
>
> Am I missing something, or am I not escaping the whitespace correctly?
> Otherwise, I'm fairly familiar with Python and would be happy to have a
> closer look. Is there a simple way to run, for example, 'borg create' from
> the sources?
>
> Thanks and regards,
>
> David
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

--
Adrian Klaver
adrian.klaver at aklaver.com
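In other words, the single quotes already pass the space through to borg untouched; the missing leading "/" (a relative rather than an absolute path) is the more likely culprit. Assuming the data really lives under /Users/davidm, the original command with an absolute path should get past the [Errno 2] error:

    BORG_PASSPHRASE="$PASS" borg create --dry-run -v --stats \
        $REPOSITORY::'{hostname}-{now:%Y-%m-%d}' \
        '/Users/davidm/Dev/test_repo/test dir'
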
From devzero at web.de  Fri Mar 31 07:33:33 2017
From: devzero at web.de (devzero at web.de)
Date: Fri, 31 Mar 2017 13:33:33 +0200
Subject: [Borgbackup] borg info shows 615GB but du -s -h only shows 574GB
Message-ID: 

Can someone explain the difference?

Why does borg info show 615.67 GB as the deduplicated size of all archives when du shows only 574GB?

regards
roland

[root at backupvm2 borg-repos]# /backup/bin/borg-1.0.10 info /iscsi/lun1/borg-repos/myhost::archive-2017-03-30_0652
Name: archive-2017-03-30_0652
Fingerprint: a2a87dff61fd9cd3a4696c13f95ace84c9158de4478bc6d683cb162c342fb8ff
Hostname: backupvm2
Username: root
Time (start): Thu, 2017-03-30 08:54:41
Time (end):   Thu, 2017-03-30 09:02:09
Command line: /backup/bin/borg create --filter=AME --info --list --stats --numeric-owner --compression lz4 /iscsi/lun1/borg-repos/myhost::archive-2017-03-30_0652 .
Number of files: 302987

                      Original size      Compressed size    Deduplicated size
This archive:               1.00 TB            671.73 GB             14.28 MB
All archives:              49.58 TB             30.28 TB            615.67 GB

                      Unique chunks         Total chunks
Chunk index:                 515622             29084818

[root at backupvm2 borg-repos]# du -s -h myhost/
574G    myhost/

From public at enkore.de  Fri Mar 31 07:49:54 2017
From: public at enkore.de (Marian Beermann)
Date: Fri, 31 Mar 2017 13:49:54 +0200
Subject: [Borgbackup] borg info shows 615GB but du -s -h only shows 574GB
In-Reply-To: 
References: 
Message-ID: <715ce88b-c339-570b-daf7-18b03745e763@enkore.de>

Hi Roland,

this is likely IEC (base 1024) vs SI (base 1000) prefixes.

See http://borgbackup.readthedocs.io/en/stable/usage.html#units

Cheers, Marian

On 31.03.2017 13:33, devzero at web.de wrote:
> Can someone explain the difference?
>
> Why does borg info show 615.67 GB as the deduplicated size of all archives when du shows only 574GB?
>
> regards
> roland
>
>
> [root at backupvm2 borg-repos]# /backup/bin/borg-1.0.10 info /iscsi/lun1/borg-repos/myhost::archive-2017-03-30_0652
> Name: archive-2017-03-30_0652
> Fingerprint: a2a87dff61fd9cd3a4696c13f95ace84c9158de4478bc6d683cb162c342fb8ff
> Hostname: backupvm2
> Username: root
> Time (start): Thu, 2017-03-30 08:54:41
> Time (end):   Thu, 2017-03-30 09:02:09
> Command line: /backup/bin/borg create --filter=AME --info --list --stats --numeric-owner --compression lz4 /iscsi/lun1/borg-repos/myhost::archive-2017-03-30_0652 .
> Number of files: 302987
>
>                       Original size      Compressed size    Deduplicated size
> This archive:               1.00 TB            671.73 GB             14.28 MB
> All archives:              49.58 TB             30.28 TB            615.67 GB
>
>                       Unique chunks         Total chunks
> Chunk index:                 515622             29084818
>
> [root at backupvm2 borg-repos]# du -s -h myhost/
> 574G    myhost/
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>

From devzero at web.de  Fri Mar 31 08:25:59 2017
From: devzero at web.de (devzero at web.de)
Date: Fri, 31 Mar 2017 14:25:59 +0200
Subject: [Borgbackup] borg info shows 615GB but du -s -h only shows 574GB
In-Reply-To: <715ce88b-c339-570b-daf7-18b03745e763@enkore.de>
References: , <715ce88b-c339-570b-daf7-18b03745e763@enkore.de>
Message-ID: 

Oh, indeed. Thanks for the hint.

But why does borg use the SI base when standard Linux tools like du or ls use IEC?

regards
roland

> Sent: Friday, 31 March 2017 at 13:49
> From: "Marian Beermann"
> To: borgbackup at python.org
> Subject: Re: [Borgbackup] borg info shows 615GB but du -s -h only shows 574GB
>
> Hi Roland,
>
> this is likely IEC (base 1024) vs SI (base 1000) prefixes.
>
> See http://borgbackup.readthedocs.io/en/stable/usage.html#units
>
> Cheers, Marian
>
> On 31.03.2017 13:33, devzero at web.de wrote:
> > Can someone explain the difference?
> >
> > Why does borg info show 615.67 GB as the deduplicated size of all archives when du shows only 574GB?
> >
> > regards
> > roland
> >
> >
> > [root at backupvm2 borg-repos]# /backup/bin/borg-1.0.10 info /iscsi/lun1/borg-repos/myhost::archive-2017-03-30_0652
> > Name: archive-2017-03-30_0652
> > Fingerprint: a2a87dff61fd9cd3a4696c13f95ace84c9158de4478bc6d683cb162c342fb8ff
> > Hostname: backupvm2
> > Username: root
> > Time (start): Thu, 2017-03-30 08:54:41
> > Time (end):   Thu, 2017-03-30 09:02:09
> > Command line: /backup/bin/borg create --filter=AME --info --list --stats --numeric-owner --compression lz4 /iscsi/lun1/borg-repos/myhost::archive-2017-03-30_0652 .
> > Number of files: 302987
> >
> >                       Original size      Compressed size    Deduplicated size
> > This archive:               1.00 TB            671.73 GB             14.28 MB
> > All archives:              49.58 TB             30.28 TB            615.67 GB
> >
> >                       Unique chunks         Total chunks
> > Chunk index:                 515622             29084818
> >
> > [root at backupvm2 borg-repos]# du -s -h myhost/
> > 574G    myhost/
> > _______________________________________________
> > Borgbackup mailing list
> > Borgbackup at python.org
> > https://mail.python.org/mailman/listinfo/borgbackup
> >
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>
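To put numbers on that for the figures above: borg reports sizes with SI prefixes (base 1000), while GNU du -h and ls -lh use base 1024, so 615.67 GB is 615.67 * 10^9 bytes, or about 573.4 GiB. That is consistent with the 574G that du -h prints for the repository directory (du -h rounds up, and the repository also holds a little metadata such as its index and config files). A quick way to compare like with like, assuming GNU coreutils:

    python3 -c "print(615.67e9 / 2**30)"   # about 573.38, i.e. the same amount expressed in GiB
    du -s --si myhost/                     # du with base-1000 units, directly comparable to borg info
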