Powersets of a list?
Alex Martelli
aleaxit at yahoo.com
Mon May 28 11:19:30 EDT 2001
"David C. Ullrich" <ullrich at math.okstate.edu> wrote in message
news:3b125c53.441935 at nntp.sprynet.com...
...
> >powerset_Alex(11.48) 32768:
> >powerset_AlexClass(12.94) 32768:
...
> The funny thing about this is that when I offered
> mine I said I imagined it was probably the slowest
> possible solution - wasn't being ironic, I figured
> it probably was. I was wrong.
Funny how a programmer's intuition may fail at times
on performance issues (though yours was on-target
about the too-many-shifts cost in my approach:-).
> I think that comparing AlexClass with the others
> is like testing the relative speed of range()
> versus xrange(); of course xrange will be faster
> because it hasn't actually constructed the entire
> list in the sense that range has. (And of course
> the AlexClass version is the one you'd want in some
> situations, for just that reason.)
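For concreteness, an indexable "virtual powerset" along these
lines might be sketched as follows (my own minimal version, not
the actual AlexClass code; subset i contains seq[j] exactly when
bit j of i is set):

```python
class PowerSet:
    """Lazy powerset: builds each subset on demand in __getitem__,
    never the whole O(N*2**N) list at once."""

    def __init__(self, seq):
        self.seq = list(seq)

    def __len__(self):
        return 2 ** len(self.seq)

    def __getitem__(self, i):
        if not 0 <= i < len(self):
            raise IndexError(i)
        # element j belongs to subset i iff bit j of i is set
        return [x for j, x in enumerate(self.seq) if (i >> j) & 1]
```

Like xrange, nothing is spelled out until you ask for it; iterating
over an instance forces each subset to be built, one at a time.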
Please note (I think it's clear in the code I posted)
that I'm forcing the "virtual sequence" as embodied
in AlexClass to be "spelled out" in its entirety as
I measure its performance. Look at the lines actually
being timed in function timetest:
    start = time.clock()
    rs = fo(seq)
    lrs = [x for x in rs]
    stend = time.clock()
the list comprehension 'copying' rs into lrs is there
for this purpose (as well as to allow comparison of
results for correctness in the following lines):
    lrs.sort()
    assert base is None or base == lrs
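Putting those fragments together, a runnable version of such a
harness might look like this (a sketch under my own assumptions:
time.clock() is gone from modern Pythons, so I substitute
time.perf_counter(); the function and parameter names echo the
quoted fragments but the rest is mine):

```python
import time

def timetest(fo, seq, base=None):
    """Time fo(seq) AND the full spelling-out of its result,
    so a lazy 'virtual sequence' gets no free pass."""
    start = time.perf_counter()   # time.clock() in the original 2001 code
    rs = fo(seq)
    lrs = [x for x in rs]         # force complete materialization
    stend = time.perf_counter()
    lrs.sort()
    # optionally check agreement with a reference result
    assert base is None or base == lrs
    return stend - start, lrs
```

Moving the list comprehension outside the timed region would, of
course, flatter any lazy approach for exactly the reason discussed
above.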
but I would have taken the list-comprehension out
of the timed-code if I had wanted to make the class
approach look better:-). And indeed, as evidenced
by the couple of output lines quoted above, AlexClass
IS slower than Alex when measured in this way.
It MAY come in handy in memory-constrained situations,
given that the faster approaches seem hard-ish to turn
into iterators without some kind of 'generator'
underlying mechanism (users of stackless must be
having lots of fun right now:-). I suspect that
some minor optimizations to it are feasible. E.g.,
the list of O(N) one-bit bitmasks, taking up O(N*N)
space, could be precomputed and stored at __init__
time rather than re-built at each __getitem__ call...
it may still be a reasonable space cost to pay
compared with the O(N*2**N) space needed all at
once by faster approaches. Or, if one has gmpy
around, bit-testing might be faster with it...
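That precompute-the-masks idea might look like this (again a
sketch with names of my own choosing, not the original code; the
O(N) list of one-bit masks is built once in __init__ instead of
on every __getitem__ call):

```python
class PowerSetPrecomputed:
    """Lazy powerset with the one-bit masks hoisted into __init__."""

    def __init__(self, seq):
        self.seq = list(seq)
        # O(N) masks, each up to N bits wide: O(N*N) space total,
        # still far below the O(N*2**N) of a fully spelled-out powerset
        self.masks = [1 << j for j in range(len(self.seq))]

    def __len__(self):
        return 1 << len(self.seq)

    def __getitem__(self, i):
        if not 0 <= i < len(self):
            raise IndexError(i)
        # reuse the stored masks rather than rebuilding them each call
        return [x for x, m in zip(self.seq, self.masks) if i & m]
```

Whether the saved rebuilding outweighs the extra stored state is
the kind of thing only timing it (as above) can settle.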
Alex