python implementation of a new integer encoding algorithm.

Ian Kelly ian.g.kelly at gmail.com
Thu Feb 19 13:04:10 EST 2015


On Thu, Feb 19, 2015 at 8:45 AM,  <janhein.vanderburg at gmail.com> wrote:
> On Wednesday, February 18, 2015 at 11:20:12 PM UTC+1, Dave Angel wrote:
>> I'm not necessarily doubting it, just challenging you to provide a data
>> sample that actually shows it.  And of course, I'm not claiming that
>> 7bit is in any way optimal.  You cannot define optimal without first
>> defining the distribution.
>
> Weird results.
> For a character size of 2, the growth processes are shown below.
> I listed the decimal value, the difficult representation, a stop-bit encoding, and the number of characters by which they differ in length:
> 0:  00                          00                              0
> 1:  01                          01                              0
> 2:  10, 00                      10, 00                          0
> 3:  10, 01                      10, 01                          0
> 4:  10, 10                      11, 00                          0
> 5:  10, 11                      11, 01                          0
> 6:  11, 00.00                   11, 10, 00                      0
> 7:  11, 00.01                   11, 10, 01                      0
> 8:  11, 00.10                   11, 11, 00                      0
> 9:  11, 00.11                   11, 11, 01                      0
> 10: 11, 01.00                   11, 11, 10, 00                  1
> 11: 11, 01.01                   11, 11, 10, 01                  1
> 12: 11, 01.10                   11, 11, 11, 00                  1
> 13: 11, 01.11                   11, 11, 11, 01                  1
> 14: 11, 10.00, 00               11, 11, 11, 10, 00              1
> 15: 11, 10.00, 01               11, 11, 11, 10, 01              1
> 16: 11, 10.00, 10               11, 11, 11, 11, 00              1
> 17: 11, 10.00, 11               11, 11, 11, 11, 01              1
> 18: 11, 10.01, 00.00            11, 11, 11, 11, 10, 00          1
> 19: 11, 10.01, 00.01            11, 11, 11, 11, 10, 01          1
> 20: 11, 10.01, 00.10            11, 11, 11, 11, 11, 00          1
> 21: 11, 10.01, 00.11            11, 11, 11, 11, 11, 01          1
> 22: 11, 10.01, 01.00            11, 11, 11, 11, 11, 10, 00      2
> 23: 11, 10.01, 01.01            11, 11, 11, 11, 11, 10, 01      2
> 24: 11, 10.01, 01.10            11, 11, 11, 11, 11, 11, 00      2
> 25: 11, 10.01, 01.11            11, 11, 11, 11, 11, 11, 01      2
> 26: 11, 10.01, 10.00            11, 11, 11, 11, 11, 11, 10, 00  3
>
> I didn't take the time to prove it mathematically, but these results suggest to me that the complicated encoding beats the stop bit encoding.

That stop-bit variant looks extremely inefficient (and wrong) to me.
First, 2 bits per group is probably a bad choice for a stop-bit
encoding. It saves some space for very small integers, but it won't
scale well at all: fully half of the bits are stop bits! Second, I
don't understand why the leading groups are all 11s and only the later
groups introduce variability. In fact, that's practically a unary
encoding with just a small amount of binary at the end. This is what I
would expect a 2-bit stop-bit encoding to look like:

0: 00
1: 01
2: 11, 00
3: 11, 01
4: 11, 10, 00
5: 11, 10, 01
6: 11, 11, 00
7: 11, 11, 01
8: 11, 10, 10, 00
9: 11, 10, 10, 01
10: 11, 10, 11, 00
11: 11, 10, 11, 01
12: 11, 11, 10, 00
13: 11, 11, 10, 01
14: 11, 11, 11, 00
15: 11, 11, 11, 01
16: 11, 10, 10, 10, 00
17: 11, 10, 10, 10, 01
18: 11, 10, 10, 11, 00
19: 11, 10, 10, 11, 01
20: 11, 10, 11, 10, 00
21: 11, 10, 11, 10, 01
22: 11, 10, 11, 11, 00
23: 11, 10, 11, 11, 01
24: 11, 11, 10, 10, 00
25: 11, 11, 10, 10, 01
26: 11, 11, 10, 11, 00
27: 11, 11, 10, 11, 01
28: 11, 11, 11, 10, 00
29: 11, 11, 11, 10, 01
30: 11, 11, 11, 11, 00
31: 11, 11, 11, 11, 01
etc.
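
For concreteness, here's a quick sketch of an encoder that produces
the groups listed above (encode_stop_bit is just a name I'm using for
this post):

def encode_stop_bit(n):
    # One data bit per 2-bit group, most significant bit first.  The
    # first bit of each group is 1 if more groups follow and 0 in the
    # final group; the second bit is the next binary digit of n.
    bits = bin(n)[2:]                  # "0" for n == 0
    groups = ['1' + b for b in bits[:-1]] + ['0' + bits[-1]]
    return ', '.join(groups)

Running "for n in range(32): print('%d: %s' % (n, encode_stop_bit(n)))"
reproduces the enumeration above.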

Notice that the size grows as O(log n), not O(n) as above. Notice also
that the only values here for which this saves space over the 7-bit
version are 0-7. Unless you expect those values to be very common, the
7-bit encoding that needs only one byte all the way up to 127 makes a
lot of sense.
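
(To be concrete, by "the 7-bit encoding" I'm assuming the usual
varint/LEB128 arrangement: seven data bits per byte, with the high bit
flagging continuation. Something like the sketch below, although the
earlier messages may have had a different bit or byte order in mind.)

def encode_7bit(n):
    # Seven data bits per byte, least significant bits first; the high
    # bit of each byte is set when more bytes follow.
    out = bytearray()
    while True:
        n, low = n >> 7, n & 0x7f
        if n:
            out.append(0x80 | low)     # continuation bit set
        else:
            out.append(low)            # final byte
            return bytes(out)

encode_7bit(127) is one byte; encode_7bit(128) is two.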

There's also an optimization that can be added here if we wish to
inject a bit of cleverness. Notice that all values with more than one
group start with 11, never 10. We can borrow a trick from IEEE
floating point and make the leading 1 bit of the mantissa implicit.
We can't do that for 2 and 3, because then we couldn't distinguish
them from 0 and 1, and a few values such as 6, 7, 14 and 15 likewise
have to keep their full-length codes so that they don't reuse the
codes kept for 2 and 3. For every other value greater than 3, though,
the optimization removes one full group from the representation,
which appears to make the stop-bit representation as short as or
shorter than the "difficult" one for all the values that have been
enumerated above.
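
Roughly, in code (same sketch style as before; the special-casing of
2, 3, 6, 7, 14, 15, ... is the carve-out described in the previous
paragraph):

def encode_stop_bit_implicit(n):
    if n < 2:
        return '0' + str(n)            # 0 -> "00", 1 -> "01"
    bits = bin(n)[2:]                  # always starts with '1'
    # Values whose binary form is all ones, or all ones followed by a
    # single zero (2, 3, 6, 7, 14, 15, ...), keep their full-length
    # code; dropping their leading bit would reuse a shorter code.
    if all(b == '1' for b in bits[:-1]):
        payload = bits
    else:
        payload = bits[1:]             # leading 1 bit left implicit
    groups = ['1' + b for b in payload[:-1]] + ['0' + payload[-1]]
    return ', '.join(groups)

With that, 4 becomes "10, 00" instead of "11, 10, 00", 8 becomes
"10, 10, 00" instead of "11, 10, 10, 00", and so on.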


