How to make Python run as fast (or faster) than Julia

Rick Johnson rantingrickjohnson at gmail.com
Sun Feb 25 22:26:12 EST 2018


On Friday, February 23, 2018 at 8:48:55 PM UTC-6, Steven D'Aprano wrote:
[...]
> Take the Fibonacci double-recursion benchmark. Okay, it
> tests how well your language does at making millions of
> function calls. Why?

Because making "millions of function calls" is what software
*DOES*!

Granted, and as others have testified in this thread, I
myself have almost never implemented an algorithm like that
monstrosity of recursion that is implemented in fib(), but
whether we use recursive functions or not, a high number of
function calls is inevitable in our software. And if you
don't believe me, then use sys.settrace(myFunc) to trace all
function calls to stdout, and you shall be enlightened[1].
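
A minimal sketch of that idea, if you'd rather count the
calls than drown stdout (the names count_calls and fib below
are just illustrative):

    import sys

    call_count = 0

    def count_calls(frame, event, arg):
        # The global trace function receives a 'call' event for
        # every Python-level function call; returning None skips
        # the per-line tracing we don't need here.
        global call_count
        if event == 'call':
            call_count += 1
        return None

    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    sys.settrace(count_calls)
    fib(20)
    sys.settrace(None)
    print('function calls traced: %d' % call_count)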

> How often do you make millions of function calls?

All the time! Of course, if the extent of your code base is
a single script containing the source `print('Hello World')`,
perhaps your experience may differ from that of others here.

> For most application code, executing the function is far
> more costly than the overhead of calling it,

What? Your functions are only called _once_ during the
lifetime of your code execution? Enough with the leg
pulling.

> and the call overhead is dwarfed by the rest of the
> application.

Of course, because it's calling _other_ functions.

> For many, many applications, the *entire* program run could
> take orders of magnitude fewer function calls than a single
> call to fib(38).

Hmm, let's see:

    def main():
        print('hello world')
    main()

By golly, he's right! O_O
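
For the record, the arithmetic behind that claim is easy to
check: assuming the usual naive fib with two base cases, the
call count obeys calls(n) = 1 + calls(n-1) + calls(n-2), so a
sketch like this puts a figure on it:

    def calls(n):
        # calls(0) == calls(1) == 1, then
        # calls(n) = 1 + calls(n-1) + calls(n-2), computed iteratively.
        a, b = 1, 1
        for _ in range(n - 1):
            a, b = b, 1 + a + b
        return b

    print('fib(38) makes %d calls' % calls(38))  # 126491971

So the hello-world above makes a single Python-level call (to
main), while one fib(38) racks up a hundred-million-odd calls.
"Orders of magnitude" indeed.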


--
[1] But, in the meantime, you may want to make yourself a
taco; go play a round of golf; and possibly pay a visit to a
national monument (or three!), while all that crap is printing
to stdout.


