Python Front-end to GCC

Oscar Benjamin oscar.j.benjamin at gmail.com
Mon Oct 21 05:55:10 EDT 2013


On 21 October 2013 08:46, Steven D'Aprano <steve at pearwood.info> wrote:
> On Sun, 20 Oct 2013 20:35:03 -0700, Mark Janssen wrote:
>
> [Attribution to the original post has been lost]
>>> Is a jit implementation of a language (not just python) better than
>>> traditional ahead of time compilation.
>>
>> Not at all.  The value of jit compilation, I believe, is purely for the
>> dynamic functionality that it allows.  AOT compilation will never allow
>> that, but in return you get massive performance and runtime-size gains.
>
> On the contrary, you have that backwards. An optimizing JIT compiler can
> often produce much more efficient, heavily optimized code than a static
> AOT compiler, and at the very least they can optimize different things
> than a static compiler can. This is why very few people think that, in
> the long run, Nuitka can be as fast as PyPy, and why PyPy's ultimate aim
> to be "faster than C" is not moonbeams:

That may be true, but both of the examples below are spurious at best.
A decent AOT compiler would reduce both benchmark programs to the null
program, as haypo noted in the comments:
http://morepypy.blogspot.co.uk/2011/02/pypy-faster-than-c-on-carefully-crafted.html?showComment=1297205903746#c2530451800553246683

> http://morepypy.blogspot.com.au/2011/02/pypy-faster-than-c-on-carefully-crafted.html
>
> http://morepypy.blogspot.com.au/2011/08/pypy-is-faster-than-c-again-string.html
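For concreteness, the add() benchmark from the first of those posts is
roughly of the following shape (reconstructed here, not copied verbatim,
and parametrised over the loop count so it can be run quickly). Because
add() is pure and its result is discarded, an AOT compiler that can see
the definition is entitled to remove the call, and with it the entire
loop:

```python
def add(a, b):
    # Pure function: no side effects, so a compiler that proves this
    # may discard any call whose result goes unused.
    return a + b

def main(n):
    i = 0
    a = 0.0
    while i < n:
        a += 1.0
        add(a, a)  # result discarded: the call, and with it the
        i += 1     # loop body, can legally be optimised away
    return a

main(1000000)  # the blog post uses n = 1000000000
```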

I just modified the add() example so that none of the operations can
be entirely optimised away (add() is just a + b, as in the original
post):

def add(a, b):
    return a + b

def main():
    i = 0
    a = 0.0
    b = 0.0
    while i < 1000000000:
        a += 1.0
        b += add(a, a)
        i += 1
    print(b)

main()

Similarly for the C version:

#include <stdio.h>

/* add() is compiled in a separate translation unit so that gcc
   cannot inline the call or optimise it away. */
double add(double a, double b);

int main()
{
  int i = 0;
  double a = 0;
  double b = 0;
  while (i < 1000000000) {
    a += 1.0;
    b += add(a, a);
    i++;
  }
  printf("%f\n", b);
  return 0;
}

My timings:

$ gcc -O3 x.c y.c
$ time ./a.exe
1000000000134218000.000000
real    0m5.609s
user    0m0.015s
sys     0m0.000s
$ time pypy y.py
1.00000000013e+18

real    0m9.891s
user    0m0.060s
sys     0m0.061s

So the PyPy version takes roughly twice as long to run this (9.9s
versus 5.6s). That's impressive, but it's not "faster than C".

I also compared a script that does intensive decimal computation, run
under CPython 3.3 and PyPy 2.1 (which implements Python 2.7). This is
essentially a comparison between the C implementation of the decimal
module and PyPy's JIT-compiled version of the pure Python module.
CPython 3.3 takes 10 seconds and PyPy 2.1 takes 45 seconds. Again
that's impressive (a lot of work went into making the C implementation
of the decimal module as fast as it is) but it's not faster than C.
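The decimal script itself isn't reproduced here, but a benchmark of
this general kind (a stand-in I'm sketching, not the actual script)
exercises the same code paths: every operation in the loop goes
through Decimal arithmetic, so it stresses the C _decimal module on
CPython 3.3 and the pure Python decimal.py under PyPy's JIT:

```python
from decimal import Decimal, getcontext

def pi_decimal(ndigits=50, terms=200):
    # Estimate pi via Machin's formula, pi/4 = 4*arctan(1/5) -
    # arctan(1/239), entirely in Decimal arithmetic. Note this
    # adjusts the global decimal context precision.
    getcontext().prec = ndigits + 5

    def arctan_inv(x):
        # arctan(1/x) as an alternating Decimal power series.
        total = Decimal(0)
        xpow = Decimal(1) / x
        x2 = x * x
        sign = 1
        for k in range(terms):
            total += sign * xpow / (2 * k + 1)
            xpow /= x2
            sign = -sign
        return total

    return 4 * (4 * arctan_inv(Decimal(5)) - arctan_inv(Decimal(239)))
```

Run with a large enough precision and repetition count, the relative
timings of CPython's C decimal and PyPy's JIT'd pure Python decimal
show the same pattern.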

I don't mean to criticise PyPy: I've just tested it, I'm impressed,
and I'll definitely try to use it where possible. I do think that
some of the marketing is misleading, though.


Oscar


