Surprising (for me) benchmark results...

Alex Martelli aleaxit at yahoo.com
Wed May 2 10:57:15 EDT 2001


"Courageous" <jkraska1 at san.rr.com> wrote in message
news:d870ftoe4eue0lkb1gh70qfb5lq71hko3o at 4ax.com...
    [snip]
> Nothing, really. I keep pretending Python doesn't do that
> silly constant allocation thing. :-)
>
> >> If we weren't supposed to be talking about list.append(),
> >> sorry. :-)
> >
> >I _am_ talking about .append().  From past experiments, I have
> >noticed that using a guaranteed-amortized-O(1) container does
> >not shew _practical_ advantages...
>
> Does not show practical advantages with regards to what?

Doesn't show, or shew (both excellent English verbs:-), practical
(measurable) advantages wrt Python's current "silly constant
allocation thing".  I felt so *SURE* that changing the allocation
strategy would make a difference... _measurement_ disabused me
of that ill-founded certainty.  (Fortunately, I *know* that
intuition about performance issues is very often misplaced,
so I'm not really surprised when measuring shows my intuition
was wrong -- I _may_ occasionally be surprised when profiling
shows the time sink is exactly what I would have guessed...:-).


> >reallocation still costs:-) -- but that, of course, is not
> >a very relevant point regarding big-O issues.
>
> I'm sure that one of the reasons that this hasn't been addressed
> is that it does matter for routine values of n.

I measured on *large* values of N -- I don't recall the exact
numbers, but they didn't seem to matter much: allocating by
the current strategy OR by geometrical growth (say N'=100+1.5N,
when the current allocation of N items must grow to a new size
N') produced very similar results; preallocating the whole
caboodle in advance always gave a substantial advantage.
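For concreteness, a minimal sketch of that comparison (my own toy
harness on a modern Python, not the original measurement code; the
value of N and the timing helper are illustrative):

```python
# Toy benchmark: plain .append() (interpreter's built-in growth
# strategy) versus preallocating the whole list up front.
from time import perf_counter

N = 1_000_000  # a "large" N; adjust to taste

def by_append():
    # Relies on the interpreter's internal reallocation strategy.
    result = []
    for i in range(N):
        result.append(i)
    return result

def by_prealloc():
    # Preallocate the whole caboodle, then fill by index.
    result = [None] * N
    for i in range(N):
        result[i] = i
    return result

def timed(fn):
    start = perf_counter()
    fn()
    return perf_counter() - start

if __name__ == "__main__":
    for fn in (by_append, by_prealloc):
        print(f"{fn.__name__}: {timed(fn):.3f}s")
```

Exact numbers will of course vary by platform, allocator, and
interpreter version, which is rather the point of the thread.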

That was on Windows platforms, and for Python 2.0, by the way.
Maybe performance behaves otherwise with different allocators.


Alex






More information about the Python-list mailing list