Optimizing list processing

Steven D'Aprano steve+comp.lang.python at pearwood.info
Wed Dec 11 18:54:22 EST 2013


I have some code which produces a list from an iterable, using at least 
one temporary list and a Decorate-Sort-Undecorate idiom. The algorithm 
looks something like this (simplified):

# Decorate each value with its original index, sort by value, then
# undecorate, keeping just the indices (the order that would sort the data).
table = sorted([(x, i) for i, x in enumerate(iterable)])
table = [i for x, i in table]
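
With a toy input (just for illustration), the result is the list of 
indices that would sort the data:

>>> iterable = ['b', 'c', 'a']
>>> table = sorted([(x, i) for i, x in enumerate(iterable)])
>>> [i for x, i in table]
[2, 0, 1]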

The problem here is that for large iterables, say 10 million items or so, 
this is *painfully* slow, as my system has to page memory like mad to fit 
two large lists into memory at once. So I came up with an in-place 
version that saves (approximately) two-thirds of the memory needed.

table = [(x, i) for i, x in enumerate(iterable)]
table.sort()
# Overwrite each (value, index) pair with just the index, reusing the
# already-sorted list instead of building a second one.
for pos, (x, i) in enumerate(table):
    table[pos] = i
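
With the same toy input, the in-place version leaves the identical 
result behind:

>>> table = [(x, i) for i, x in enumerate(['b', 'c', 'a'])]
>>> table.sort()
>>> for pos, (x, i) in enumerate(table):
...     table[pos] = i
...
>>> table
[2, 0, 1]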

For giant iterables (ten million items), this version is a big 
improvement, about three times faster than the list comp version. Since 
we're talking about the difference between 4 seconds and 12 seconds (plus 
an additional 40-80 seconds of general slow-down as the computer pages 
memory into and out of virtual memory), this is a good, solid 
optimization.

Except that for more reasonably sized iterables, it's a pessimization. 
With one million items, the ratio is the other way around: the list comp 
version is 2-3 times faster than the in-place version. For smaller lists, 
the ratio varies, but the list comp version is typically around twice as 
fast. A good example of trading memory for time.
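
If you want to see where the crossover falls on your own machine, a 
bare-bones timing harness along these lines will do; the sizes and the 
random test data are arbitrary choices, and the helper names are just 
for illustration:

import random
from timeit import default_timer

def argsort_listcomp(data):
    # DSU with a second temporary list.
    table = sorted([(x, i) for i, x in enumerate(data)])
    return [i for x, i in table]

def argsort_inplace(data):
    # DSU reusing the decorated list in place.
    table = [(x, i) for i, x in enumerate(data)]
    table.sort()
    for pos, (x, i) in enumerate(table):
        table[pos] = i
    return table

for n in (10**5, 10**6, 10**7):
    data = [random.random() for _ in range(n)]
    for func in (argsort_listcomp, argsort_inplace):
        start = default_timer()
        func(data)
        print(n, func.__name__, default_timer() - start)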

So, ideally I'd like to write my code like this:


table = [(x, i) for i, x in enumerate(iterable)]
table.sort()
if len(table) < ?????:
    table = [i for x, i in table]
else:
    for pos, (x, i) in enumerate(table):
        table[pos] = i

where ????? no doubt will depend on how much memory is available in one 
contiguous chunk.
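
The best I've managed so far is a rough guess at whether the second list 
will fit comfortably in RAM, something like this sketch. It assumes the 
third-party psutil package is installed, and the 8-bytes-per-pointer 
figure (doubled for slack) is only a crude estimate of the new list's 
pointer array on a 64-bit build:

try:
    import psutil  # third-party; just one way of probing free memory
except ImportError:
    psutil = None

table = [(x, i) for i, x in enumerate(iterable)]
table.sort()

# Crude estimate of the extra contiguous allocation the list comp needs:
# a new pointer array of roughly 8 bytes per entry on a 64-bit build
# (the index objects already exist inside the tuples), doubled for slack.
estimated_extra = len(table) * 8 * 2

if psutil is not None and estimated_extra < psutil.virtual_memory().available:
    table = [i for x, i in table]
else:
    # No psutil, or memory looks tight: fall back to the in-place version.
    for pos, (x, i) in enumerate(table):
        table[pos] = i

But that is still a guess dressed up in code, hence the question: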

Is there any way to determine which branch I should run, apart from hard-
coding some arbitrary and constant cut-off value?



-- 
Steven


