[issue23515] Bad logic in timsort's merge_collapse

Tim Peters <report at bugs.python.org>
Wed Feb 25 18:09:36 CET 2015


Tim Peters added the comment:

Since it's impossible to trigger the error on any current machine anyway (no machine has enough memory), increasing the size of the stack would be absurd.  If you read the paper, you'll see that increasing the stack is exactly what the Java folks first did (they had changed this part of timsort in a way that _did_ make it possible to provoke the stack overflow on current hardware).  And they got it wrong, not increasing it enough.  Without the _intended_ invariant, it's difficult to figure out how much is enough.
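
For readers without the paper at hand: the invariant in question is on timsort's stack of pending run lengths -- for every entry, runs[i-2] > runs[i-1] + runs[i] and runs[i-1] > runs[i].  The old merge_collapse only re-examined the top three runs after a merge, so a violation could survive one level deeper in the stack.  Below is a minimal Python sketch of the strengthened check along the lines the paper proposes; it tracks run lengths only, merge_at() is a stand-in for the real element merge, and it is not the actual C code in listobject.c:

    # Sketch: the pending-run stack holds run lengths; the intended
    # invariants are runs[i-2] > runs[i-1] + runs[i] and runs[i-1] > runs[i].

    def merge_at(runs, i):
        # In the real sort this merges the elements of two adjacent runs;
        # here we only track lengths, so just collapse the pair.
        runs[i:i + 2] = [runs[i] + runs[i + 1]]

    def merge_collapse(runs):
        # Strengthened check: also look one entry deeper than the top three,
        # so a violation exposed by a merge can't hide lower in the stack.
        while len(runs) > 1:
            n = len(runs) - 2
            if (n > 0 and runs[n - 1] <= runs[n] + runs[n + 1]) or \
               (n > 1 and runs[n - 2] <= runs[n - 1] + runs[n]):
                if runs[n - 1] < runs[n + 1]:
                    n -= 1          # merge the smaller neighbour first
                merge_at(runs, n)
            elif runs[n] <= runs[n + 1]:
                merge_at(runs, n)
            else:
                break               # invariants hold; stop collapsing

The only cost over the old logic is the extra pair of length comparisons in the first branch -- the handful of native C integer comparisons per merge that the next paragraph refers to.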

It's not worth measuring performance.  If there are a total of R runs in the input array, a total of R-1 merges will be performed (with or without the change).  As explained in listsort.txt, per-merge overheads don't matter at all unless they're outrageous, because there are at most ceiling(len(array)/32) runs.  So the total number of merges is a small fraction of the number of array elements (call that `N`), in an N*log(N) process.  Adding a few native C integer comparisons per merge (as the change essentially does) is trivial.
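
As a back-of-the-envelope illustration (the N below and the "three extra comparisons per merge" are made-up round numbers, not measurements):

    import math

    N = 2 ** 24                     # 16M elements, purely illustrative
    max_runs = math.ceil(N / 32)    # minrun is at least 32, so at most N/32 runs
    max_merges = max_runs - 1       # R runs always means exactly R-1 merges
    element_compares = N * math.log2(N)      # rough scale of the N*log(N) work

    extra_native_compares = 3 * max_merges   # a few extra C int compares per merge
    print(max_runs, max_merges, int(element_compares), extra_native_compares)
    # 524288 524287 402653184 1572861

And each of those ~400 million element comparisons is at least a C function call, usually a full Python-level rich comparison, so the ~1.6 million extra native integer comparisons are noise even relative to this count.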

It may be possible to contrive inputs which run significantly slower (or faster!) with the change, because the order in which runs are merged may change.  But then you'd just be chasing patterns of HW and OS layers-of-caching behavior specific to the box the test is running on.

----------

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue23515>
_______________________________________

