list.clear() missing?!?

Dan Christensen jdc at uwo.ca
Wed Apr 12 22:00:10 EDT 2006


Raymond Hettinger <python at rcn.com> writes:

> Felipe Almeida Lessa writes:
> 
> > I love benchmarks, so as I was testing the options, I saw something very
> > strange:
> >
> > $ python2.4 -mtimeit 'x = range(100000); '
> > 100 loops, best of 3: 6.7 msec per loop
> > $ python2.4 -mtimeit 'x = range(100000); del x[:]'
> > 100 loops, best of 3: 6.35 msec per loop
> >
> > Why is the first benchmark the slowest?  I don't get it... could
> > someone test this, too?
>
> It is an effect of the memory allocator and fragmentation.  The first
> builds up a list with increasingly larger sizes.  

I don't see what you mean by this.  There are many lists all of
the same size.  Do you mean some list internal to the memory
allocator? 
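(For what it's worth, the "increasingly larger sizes" part can be seen
directly: CPython over-allocates a list's buffer, so a growing list moves
through progressively larger blocks rather than resizing on every append.
A small sketch using sys.getsizeof -- the variable names are mine:)

```python
import sys

# Watch the allocated size of a growing list.  CPython over-allocates,
# so the underlying buffer jumps to increasingly larger blocks instead
# of growing one element at a time.
lst = []
sizes = []
for i in range(1000):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# Far fewer distinct buffer sizes than appends -- each reallocation
# grabs a larger block than the last.
distinct = sorted(set(sizes))
print(len(distinct), "distinct allocations for 1000 appends")
```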

> It periodically
> cannot grow in-place because something is in the way (some other
> object) so it needs to move its current entries to another, larger
> block and grow from there.  In contrast, the other variants are
> reusing the previously cleared-out large block.
>
> Just for grins, replace the first with:
>   'x=None; x=range(100000)'
> The assignment to None frees the reference to the previous list and
> allows it to be cleared so that its space is immediately available to
> the new list being formed by range().

It's true that this runs at the same speed as the del variants on my
machine.  That's not too surprising to me, but I still don't
understand why the del variants are more than 5% faster than the first
version.
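(The comparison can also be run in-process with the timeit module rather
than from the shell.  Note the thread is about Python 2.4, where range()
built a real list; on Python 3, range() is lazy, so list(range(...)) is
the equivalent.  Absolute numbers depend on the machine; only the
relative ordering matters:)

```python
import timeit

# The three variants from the thread.  On Python 3, range() is lazy,
# so list(range(...)) stands in for the Python 2 range() call.
plain = timeit.timeit("x = list(range(100000))", number=100)
del_slice = timeit.timeit("x = list(range(100000)); del x[:]", number=100)
rebind = timeit.timeit("x = None; x = list(range(100000))", number=100)

print(f"plain:   {plain:.4f}s")
print(f"del[:]:  {del_slice:.4f}s")
print(f"rebind:  {rebind:.4f}s")
```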

Once this is understood, is it something that could be optimized?
It's pretty common to rebind a variable to a new value, and if
this could be improved 5%, that would be cool.  But maybe it
wouldn't affect anything other than such micro benchmarks.

Dan
