How to make Python run as fast (or faster) than Julia

Python python at bladeshadow.org
Fri Feb 23 13:43:06 EST 2018


On Sat, Feb 24, 2018 at 03:42:43AM +1100, Chris Angelico wrote:
> >> If that were so, then the comparison should use the fastest *Python*
> >> implementation.
> >
> > Doing that would completely fail to accomplish the task of comparing
> > the performance of recursive function calls in the two languages,
> > which is what the benchmark was designed to do.  So, no actually, it
> > shouldn't.
> >
> 
> Where does the author say that the benchmark is designed to compare
> recursion? 

Chris, you're a smart guy...  Are you suggesting that the Fibonacci
sequence was selected as a benchmark because it's such an amazingly
useful problem that it, in and of itself, warrants having such a
benchmark?  Or do you think the reason it makes sense to have such a
benchmark is that, like the reason it's presented in pretty much
every CS program ever, it offers an opportunity to consider a
particular class of problems, the different techniques for solving
them, and the performance characteristics of those solutions?


But, to answer your question more directly, here:

  https://julialang.org/benchmarks/

    "It is important to note that the benchmark codes are not written
    for absolute maximal performance (the fastest code to compute
    recursion_fibonacci(20) is the constant literal 6765). Instead,
    the benchmarks are written to test the performance of identical
    algorithms and code patterns implemented in each language. For
    example, the Fibonacci benchmarks all use the same (inefficient)
    doubly-recursive algorithm..."

Satisfied?
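
For anyone who hasn't looked at it, the doubly-recursive algorithm
that passage describes looks roughly like this in Python (a minimal
sketch; the function name and exact spelling are mine, not the
benchmark suite's):

    def fib(n):
        # Intentionally naive doubly-recursive Fibonacci: each call
        # spawns two more calls, so the amount of work (and the number
        # of function calls being measured) grows exponentially with n.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(20))  # 6765, the constant literal cited above

The inefficiency is the point: it hammers on recursive function-call
overhead, which is exactly what the benchmark sets out to compare.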

> Recursion is sometimes a good way to describe an algorithm, but
> rarely a good way to implement it at a low level.

I'm well aware, and said as much elsewhere.  As I also said
elsewhere, I never claimed it was a particularly useful benchmark.
It is, nevertheless, designed to accomplish the stated goal, and it
does exactly that.  You can decide for yourself how useful that goal
is, but you can't really argue that it doesn't serve that purpose.

So, by changing the algorithm, the article defeats the purpose of the
benchmark.  It makes some fine points about code optimization, but it
completely fails at its stated purpose (making the benchmarks
fairer).  Its comparisons are substantially less valid than the ones
made by the Julia benchmarks: it optimizes only the Python algorithm,
and instead of testing a similarly optimized algorithm in Julia, it
reuses Julia's results for the intentionally unoptimized algorithm
those benchmarks used.  Even if testing optimized code is the point,
as the article claims, it utterly fails to do that.  Bad science.



