[Borgbackup] how to back up 300 million files

Marian Beermann public at enkore.de
Thu May 4 07:12:42 EDT 2017


On 04.05.2017 13:08, Maurice Libes wrote:
> On 04/05/2017 at 12:59, Marian Beermann wrote:
>> On 04.05.2017 12:47, Maurice Libes wrote:
>>> Another answer/question, from a neophyte's point of view:
>>>
>>> Is borg an appropriate solution in this case of very small files (12 kB),
>>> since borg will never split the files into chunks?
>>> So don't we lose the benefit of deduplication?
>>> Or am I wrong?
>>> I don't remember what the limit is for a file to be split into chunks.
>>>
>> Small files won't be split into chunks, but they will still be
>> deduplicated.
> If they are not split, then you are talking about deduplication at the
> file "level", if I understand correctly?

Yes

> Like another incremental backup, which does not back up again the same
> files that are already present?

It depends - a simple incremental backup (e.g. rsnapshot) works that way,
but a diff-based incremental backup would only store a diff.
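
A rough Python sketch of the distinction, if that helps (the file contents
below are made up, and real diff-based tools are of course more involved):

    old = b"A" * 12_000              # yesterday's version of a small file
    new = b"A" * 12_000 + b"B"       # today's version, one byte appended

    # Simple incremental: the file changed, so it is stored again in full.
    simple_store = new

    # Diff-based incremental: only the delta is stored
    # (toy delta: length of the unchanged prefix plus the new tail).
    prefix = 0
    while prefix < min(len(old), len(new)) and old[prefix] == new[prefix]:
        prefix += 1
    diff_store = (prefix, new[prefix:])            # (12000, b"B")

    print(len(simple_store), len(diff_store[1]))   # 12001 vs 1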

> (Let's say it is not what I call deduplication; for me deduplication is
> a process at the block level, but maybe it is a question of terms.)
> 
> ML

The two cases are the same if a file consists only of one "block" ;)
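
To make that concrete, here is a toy Python sketch of chunk-based
deduplication (not borg's actual chunker - the real one is a content-defined
rolling-hash chunker, tunable via --chunker-params, and the threshold below
is made up): a file smaller than the minimum chunk size becomes a single
chunk, so identical small files collapse to one stored object keyed by the
chunk's hash.

    import hashlib

    MIN_CHUNK_SIZE = 512 * 1024          # made-up threshold for this sketch
    store = {}                           # chunk id (hash) -> chunk data

    def chunks(data):
        # Toy chunker: small files are one chunk, larger files are cut
        # into fixed-size pieces.
        if len(data) <= MIN_CHUNK_SIZE:
            return [data]
        return [data[i:i + MIN_CHUNK_SIZE]
                for i in range(0, len(data), MIN_CHUNK_SIZE)]

    def backup(data):
        ids = []
        for piece in chunks(data):
            cid = hashlib.sha256(piece).hexdigest()
            store.setdefault(cid, piece)      # identical chunks stored once
            ids.append(cid)
        return ids

    ids_a = backup(b"x" * 12 * 1024)          # two identical 12 kB files...
    ids_b = backup(b"x" * 12 * 1024)
    assert ids_a == ids_b and len(store) == 1  # ...share one stored chunk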

Cheers, Marian


