How to make Python run as fast (or faster) than Julia

Chris Angelico rosuav at gmail.com
Fri Feb 23 13:56:25 EST 2018


On Sat, Feb 24, 2018 at 5:43 AM, Python <python at bladeshadow.org> wrote:
> On Sat, Feb 24, 2018 at 03:42:43AM +1100, Chris Angelico wrote:
>> >> If that were so, then the comparison should use the fastest *Python*
>> >> implementation.
>> >
>> > Doing that would completely fail to accomplish the task of comparing
>> > the performance of recursive function calls in the two languages,
>> > which is what the benchmark was designed to do.  So, no actually, it
>> > shouldn't.
>> >
>>
>> Where does the author say that the benchmark is designed to compare
>> recursion?
>
> Chris, you're a smart guy...  Are you suggesting that the reason the
> fibonacci sequence was selected as a benchmark is because it's such an
> amazingly useful problem that it, in and of itself, warrants having
> such a benchmark?  Or, do you think the reason it makes sense to have
> such a benchmark is because, like the reason it's presented in pretty
> much every CS program ever, it presents an opportunity to consider a
> particular class of problems and different techniques for solving
> those problems, and the performance characteristics of those
> solutions?
>
>
> But, to answer your question more directly, here:
>
>   https://julialang.org/benchmarks/
>
>     "It is important to note that the benchmark codes are not written
>     for absolute maximal performance (the fastest code to compute
>     recursion_fibonacci(20) is the constant literal 6765). Instead,
>     the benchmarks are written to test the performance of identical
>     algorithms and code patterns implemented in each language. For
>     example, the Fibonacci benchmarks all use the same (inefficient)
>     doubly-recursive algorithm..."
>
> Satisfied?

No, not satisfied. Everything you've said would still be satisfied if
all versions of the benchmark used the same non-recursive algorithm.
There's nothing here that says it's testing recursion, just that (for
consistency) it's testing the same algorithm. There is no reason to
specifically test *recursion*, unless that actually aligns with what
you're doing. For instance, when Google Chrome rolled out its new V8
JavaScript engine, it was specifically optimized for recursion,
because many Google apps used recursion heavily. If you're
implementing a new LISP interpreter, you should probably optimize for
recursion, because most LISP code is going to be written recursively.
But otherwise, there's no particular reason to stick to recursion.
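(For reference, the doubly-recursive algorithm the Julia benchmark page
describes looks roughly like this in Python -- a sketch from the quoted
description, not the benchmark's actual source:)

```python
def fib(n):
    # Deliberately inefficient doubly-recursive algorithm: each call
    # spawns two more, so the number of calls grows exponentially in n.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(20))  # 6765 -- the "constant literal" the benchmark page mentions
```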

> So, by changing the algorithm, the article defeats the purpose of the
> benchmark.  It makes some fine points about code optimization, but it
> completely fails at its stated purpose (to make the benchmarks more
> fair).  The comparisons it makes are substantially less valid than the
> ones made by the Julia benchmarks, on account of optimizing only the
> algorithm used by Python, and not testing with a similarly optimized
> algorithm in Julia, but rather using its results for the intentionally
> unoptimized algorithm those benchmarks used.  Even if testing
> optimized code is the point, as the article claims, it utterly fails
> to do that.  Bad science.
>

I won't dispute that part. The correct way to do this would be to
optimize both sides fairly - either not at all, or equivalently (for
some definition of equivalence). But switching both sides to an
unoptimized iterative algorithm would be perfectly fair. Recursion is
NOT essential to the benchmark.
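(As a sketch of what "switching both sides" to an unoptimized iterative
algorithm might look like -- the same function, no recursive calls:)

```python
def fib_iter(n):
    # Plain loop instead of recursion: O(n) additions rather than
    # an exponential number of function calls.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_iter(20))  # 6765, matching the doubly-recursive version
```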

ChrisA
