How to make Python run as fast (or faster) than Julia

Python python at bladeshadow.org
Thu Feb 22 22:31:28 EST 2018


On Fri, Feb 23, 2018 at 01:27:53AM +0000, Steven D'Aprano wrote:
> On Thu, 22 Feb 2018 17:53:30 +0000, bartc wrote:
> > As I said, people keep missing the point. The fact this uses a grossly
> > inefficient way of calculating Fibonacci seems to blind them to any
> > other considerations.
> > 
> > The actual result is irrelevant, so long as it's correct. The important
> > thing is those 50 million calls.
> 
> Why do you care about the 50 million calls? That's crazy -- the important 
> thing is *calculating the Fibonacci numbers as efficiently as possible*.

If you are writing practical programs, that's true.  But the Julia
benchmarks are not practical programs; they are designed to compare
the performance of various language features across a range of
languages.  

How is it a fair comparison to pit Julia running an algorithm that is
not optimal in pretty much any language against a completely different
algorithm, with a much better order of complexity, optimized
specifically for Python?
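
For reference, the recursive version in question looks something like
this when transliterated into Python (a sketch; the benchmark's actual
code may differ in small details such as the base case):

def fib(n):
    # Deliberately naive: each call spawns two more, so the call count
    # grows exponentially with n.  fib(36), for instance, works out to
    # roughly 48 million calls (2*fib(37) - 1 of them, counting the
    # outer call), which is presumably where the numbers in this
    # thread come from.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

The work is deliberately in the function-call overhead, not in the
arithmetic; that's the whole point of the exercise.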

The article author even points out that the Fibonacci problem can be
solved in Julia with the same technique he used for his Python
solution, but he makes no effort to benchmark it and then goes on to
declare Python the winner anyway.  That's just bad science.  Many of
the comments after the article repeatedly point out the same
fundamental flaw that bartc raised: the benchmark isn't intended to
determine the fastest way to calculate Fibonacci numbers; it's meant
to measure the language's efficiency at solving this class of problem
in this particular way.
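
For comparison, the kind of rewrite being timed on the Python side is
linear rather than exponential, so the tens of millions of calls
simply never happen.  This is a sketch, not the article's actual code:

def fib_iter(n):
    # Linear-time rewrite: one loop, two locals, no recursion at all.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Or keep the recursive shape but cache results, which also collapses
# the call count to O(n):
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

Timing either of these against the doubly recursive Julia version
tells you nothing about how efficiently each language makes tens of
millions of function calls, which is what the benchmark was written
to measure.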

> I don't give a flying fox about how fast the compiler can do those 48 
> million calls

And if you needed to write a program that HAD TO perform those 48
million calls, wouldn't you care then?  Don't forget that in the real
world, the reasons a program HAS TO do it that way can include things
like, "I don't know any other way to solve it and I need it done RFN,"
or, "My boss is a moron and insisted I do it this way..."  [If you've
never seen the latter, consider yourself lucky; I sadly have.]

> When comparing two compilers, you are ALWAYS comparing two different 
> programs.

Such benchmarks still compare the efficiency with which each compiler
(ultimately) turns the same algorithm, written in its own syntax, into
executed instructions, and that's still a fair and useful comparison
of the two languages for solutions to problems of that particular
class/structure/whatever.
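
To make that concrete, the apples-to-apples measurement on the Python
side looks something like this (a sketch; the repetition count is
arbitrary, and fib is just the naive version again, repeated so the
snippet stands on its own):

import timeit

def fib(n):
    # The same deliberately naive algorithm the benchmark exercises in
    # every language.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Time 10 runs of fib(30); what varies between languages is purely how
# fast they grind through the same pattern of recursive calls.
print(timeit.timeit("fib(30)", globals=globals(), number=10))

Swap in a different algorithm on one side and that comparison is gone.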

It isn't always feasible to know that a particular problem has a more
efficient solution in the language you're using; no one's knowledge of
algorithms is exhaustive, and most of us have to care about deadlines
far more than about whether our implementations are exactly the most
efficient ones possible.  The average programmer typically has very
little control over, or even awareness of, what code the compiler
generates.  You can say that makes them shitty programmers, but guess
what?  The world is full of shitty programmers.

In that world, the real world, if you forbid these types of
comparisons as inherently unfair, simply because the compiler will
obviously not generate *precisely* the same code, then you cannot,
with any measure of sanity, determine which language is the better
tool for solving your problem of the day, given the tools at your
disposal (including your own knowledge of algorithms).  These types of
benchmarks are useful for discovering what a given language is good at
and what it is not.  That information is useful either for choosing
the more efficient language for the solution you're going to
implement, or even as a hint that the language you want to use may
offer a more efficient way to achieve the result you need.




