[Python-ideas] Optimizing list.sort() by checking type in advance

Elliot Gorokhovsky elliot.gorokhovsky at gmail.com
Mon Oct 10 23:29:28 EDT 2016


Thanks for looking at this!

That's why I spent months of my life (overall) devising a sequence of
sorting algorithms for Python that reduced the number of comparisons needed.


Yes, that's why I think this is so cool: for a couple dozen lines of code,
we can get (at least for some cases, according to my questionable
benchmarks) the kinds of massive improvements you had to use actual
computer science to achieve (as opposed to mere hackery).


Note that when Python's current sort was adopted in Java, they still kept a
quicksort variant for "unboxed" builtin types.  The adaptive merge sort
incurs many overheads that often cost more than they save unless
comparisons are in fact very expensive compared to the cost of pointer
copying (and in Java comparison of unboxed types is cheap).  Indeed, for
native numeric types, where comparison is dirt cheap, quicksort generally
runs faster than mergesort despite that the former does _more_ comparisons
(because mergesort does so much more pointer-copying).


Ya, I think this may be a good approach for floats: if the list is all
floats, just copy the float values into a separate array, use the standard
library quicksort, and then reconstruct a sorted PyObject* array. Like maybe
set up a struct { PyObject *payload; double key; } type of deal. This
wouldn't work for strings (unicode is scary), and probably not for ints
(one would have to check that all the ints are within C long bounds).
Though on the other hand, perhaps this would be too expensive?
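A pure-Python model of that decorate/qsort/undecorate idea (sorting (key,
index) pairs stands in for qsort over the struct array, and means payload
objects are never compared; the function name and signature here are just
illustrative, not proposed API):

```python
def float_keyed_sort(items, key=lambda x: x):
    """Model of the proposed float fast path: extract raw float keys,
    sort (key, index) pairs so payloads are never compared, then rebuild
    the list.  In C the pairs would be the
    struct { PyObject *payload; double key; } array fed to qsort."""
    keys = [key(x) for x in items]
    # Bail out to the general sort on the first non-float key.
    if not all(type(k) is float for k in keys):
        raise TypeError("fast path requires all-float keys")
    order = sorted(zip(keys, range(len(items))))  # compares doubles, then ints
    return [items[i] for _, i in order]
```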

I had considered something "like this" for Python 2, but didn't pursue it
because comparison was defined between virtually any two types (34 < [1],
etc), and people were careless about that (both by design and by
accident).  In Python 3, comparison "blows up" for absurdly mixed types, so
specializing for homogeneously-typed lists is a more promising idea on the
face of it.

Determining _whether_ a list's objects have a common type takes just
len(list)-1 C-level pointer comparisons, and so goes fast.  So I expect
that, when it applies, this would speed up even sorting an already-ordered
list with at least 2 elements.
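That scan can be sketched in Python, where `type(a) is type(b)` plays the
role of the C-level `Py_TYPE(a) == Py_TYPE(b)` pointer comparison (a sketch
of the idea, not the patch's actual code):

```python
def common_type(lst):
    """Return the shared type of lst's elements, or None if the list is
    empty or mixed.  Costs exactly len(lst) - 1 identity checks, each of
    which is a single pointer comparison at the C level."""
    if not lst:
        return None
    t = type(lst[0])
    for x in lst[1:]:
        if type(x) is not t:   # C: Py_TYPE(x) != t -- mismatch, give up
            return None
    return t
```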

That's what my crude benchmarks indicate... when I applied my sort to a
list of 1e7 ints with a float tacked on the end, my sort actually ended up
being a bit faster over several trials (which I attribute to
PyObject_RichCompare == Py_True being faster than PyObject_RichCompareBool
== 1, apologies for any typos in that code).


For a mixed-type list with at least 2 elements, it will always be pure
loss.  But (a) I expect such lists are uncommon (and especially uncommon in
Python 3); and (b) a one-time scan doing C-level pointer comparisons until
finding a mismatched type is bound to be a relatively tiny cost compared to
the expense of all the "rich comparisons" that follow.

So +1 from me on pursuing this.

Elliot, please:

- Keep this on python-ideas.  python-dev is for current issues in Python
development, not for speculating about changes.

- Open an issue on the tracker:  https://bugs.python.org/


OK


- At least browse the info for developers:
https://docs.python.org/devguide/


Ya, I'm working on setting this up as a patch against the hg repo, as
opposed to an extension module, to make benchmarking cleaner/more sane.


- Don't overlook Lib/test/sortperf.py.  As is, it should be a good test of
what your approach so far _doesn't_ help, since it sorts only lists of
floats (& I don't think you're special-casing them).  If the timing results
it reports aren't significantly hurt (and I expect they won't be), then add
specialization for floats too and gloat about the speedup :-)

Ya, I mean they aren't special-cased, but homogeneous lists of floats still
fit in the tp_richcompare case, which still bypasses the expensive
PyObject_RichCompare. I guess I'll see when I implement this as a patch
and can run it on sortperf.py.


- I expect tuples will also be worth specializing (complex sort keys are
often implemented as tuples).


I'm not sure what you mean here... I'm looking at the types of lo.keys, not
of saved_ob_item (I think I said that earlier in this thread by mistake
actually). So if someone is passing tuples and using itemgetter to extract
ints or strings or whatever, the current code will work fine; lo.keys will
be scalar types. Unless I misunderstand you here. I mean, when would
lo.keys actually be tuples?
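(To make the distinction concrete: a single-field itemgetter produces
scalar keys, while a multi-field itemgetter, or a key function returning a
tuple, produces tuple keys per element, which I take to be the case meant
here.)

```python
from operator import itemgetter

rows = [("b", 2), ("a", 2), ("a", 1)]

# One field: each key is a scalar (here a str), so lo.keys would be strings.
assert type(itemgetter(0)(rows[0])) is str

# Two fields: each key is a tuple, so lo.keys would be tuples.
assert type(itemgetter(0, 1)(rows[0])) is tuple

# Sorting on a multi-field key therefore compares tuples element-wise.
assert sorted(rows, key=itemgetter(0, 1)) == [("a", 1), ("a", 2), ("b", 2)]
```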


Nice start! :-)

Thanks!