Incremental compression

Dan Stromberg drsalists at gmail.com
Fri Feb 9 20:52:33 EST 2018


Perhaps:

import lzma
lzc = lzma.LZMACompressor()
out1 = lzc.compress(b"Some data\n")
out2 = lzc.compress(b"Another piece of data\n")
out3 = lzc.compress(b"Even more data\n")
out4 = lzc.flush()
# Concatenate all the partial results:
result = b"".join([out1, out2, out3, out4])

?

lzma compresses harder than bzip2, but it's probably slower too.
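
Wired into your byte-at-a-time loop, that might look roughly like the sketch below (count_compressed_bytes, stream and process are just placeholder names standing in for your pseudocode, not anything from the stdlib). One caveat: LZMACompressor.compress() buffers internally and can return b"" for many calls in a row, so the running count only catches up after flush().

import lzma

def count_compressed_bytes(stream, process=lambda b: None):
    # stream: any iterable of byte values (ints 0-255)
    # process: stands in for whatever per-byte work you already do
    lzc = lzma.LZMACompressor()
    count = 0
    for b in stream:
        process(b)
        # compress() may return b"" while it buffers input, so the
        # count lags behind until flush() at the end.
        count += len(lzc.compress(bytes([b])))
    count += len(lzc.flush())  # emit whatever is still buffered
    return count

# e.g. count_compressed_bytes(b"Some data\n" * 100) -> compressed size in bytes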

On Fri, Feb 9, 2018 at 5:36 PM, Steven D'Aprano
<steve+comp.lang.python at pearwood.info> wrote:
> I want to compress a sequence of bytes one byte at a time. (I am already
> processing the bytes one byte at a time, for other reasons.) I don't
> particularly care *which* compression method is used, and in fact I'm not
> even interested in the compressed data itself, only its length. So I'm
> looking for something similar to this:
>
> count = 0
> for b in stream:
>     process(b)
>     count += len(incremental_compressor.compress(b))
>
>
>
> or some variation. Apart from bzip2, do I have any other options in the
> std lib?
>
> https://docs.python.org/3/library/bz2.html#incremental-de-compression
>
>
>
> --
> Steve
>
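
zlib also has an incremental compressor in the std lib, and it's usually faster than bz2 or lzma at the cost of a weaker ratio. A minimal sketch of the same pattern, purely illustrative:

import zlib

co = zlib.compressobj(9)  # 9 = best compression
count = 0
for chunk in (b"Some data\n", b"Another piece of data\n", b"Even more data\n"):
    count += len(co.compress(chunk))
count += len(co.flush())  # flush() is needed to get the final block out
print(count)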
