From hakan at debian.org Sat Oct 1 09:38:25 2011 From: hakan at debian.org (Hakan Ardo) Date: Sat, 1 Oct 2011 09:38:25 +0200 Subject: [pypy-dev] [pypy-commit] pypy default: Hack to ensure that ll_arraycopy gets a proper effectinfo.write_descrs_arrays In-Reply-To: References: Message-ID: On Fri, Sep 30, 2011 at 11:17 PM, Maciej Fijalkowski wrote: > On Fri, Sep 30, 2011 at 6:15 PM, Hakan Ardo wrote: >> Hi, >> is there a better way to fix this? The same kind of issue might arise elsewhere? > > Make sure that raw_memcopy has the correct effect on analyzer? What effect would that be? Setting extraeffect=EF_RANDOM_EFFECTS as it can write anywhere? Can I then somehow give ll_arraycopy a more restrictive effectinfo? -- Håkan Ardö From andrew at aeracode.org Sat Oct 1 01:22:27 2011 From: andrew at aeracode.org (Andrew Godwin) Date: Sat, 01 Oct 2011 00:22:27 +0100 Subject: [pypy-dev] PyPy packaging help needed In-Reply-To: References: Message-ID: <4E864F33.70807@aeracode.org> On 30/09/11 22:03, Maciej Fijalkowski wrote: > On Fri, Sep 30, 2011 at 6:00 PM, Randall Leeds wrote: >> I've done a little bit of deb packaging before and would love a reason to be >> more involved in pypy. >> I'd be happy to get stuck into this. > > I guess what we have now is in > http://codespeak.net/svn/pypy/build/ubuntu/trunk/debian/ > > It's grossly outdated though. Drop in on #pypy on IRC if you need more info I have an updated repo at https://bitbucket.org/andrewgodwin/pypy-debian/src that builds packages directly from the nightlies - this results in a much, much quicker build process, but it's not a true debian package yet. I had a go at making it build from the source, but it takes so long to test, as it's around half an hour between writing something and seeing if it works, that I didn't get around to it. 
Andrew From fijall at gmail.com Sat Oct 1 13:27:19 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 1 Oct 2011 08:27:19 -0300 Subject: [pypy-dev] [pypy-commit] pypy default: Hack to ensure that ll_arraycopy gets a proper effectinfo.write_descrs_arrays In-Reply-To: References: Message-ID: On Sat, Oct 1, 2011 at 4:38 AM, Hakan Ardo wrote: > On Fri, Sep 30, 2011 at 11:17 PM, Maciej Fijalkowski wrote: >> On Fri, Sep 30, 2011 at 6:15 PM, Hakan Ardo wrote: >>> Hi, >>> is there a better way to fix this? The same kind of issue might arise elsewhere? >> >> Make sure that raw_memcopy has the correct effect on analyzer? > > What effect would that be? Setting extraeffect=EF_RANDOM_EFFECTS as it > can write anywhere? Can I then somehow give ll_arraycopy a more > restrictive effectinfo? Armin committed this on trunk: 78fddfb51114 > > -- > Håkan Ardö > From anto.cuni at gmail.com Sat Oct 1 13:37:48 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Sat, 01 Oct 2011 13:37:48 +0200 Subject: [pypy-dev] pypy with virtualenv? In-Reply-To: References: Message-ID: <4E86FB8C.3060300@gmail.com> On 30/09/11 19:02, Armin Rigo wrote: > If it does, well, follow the recommendation. If it doesn't, then > likely, something is again broken in the interaction of virtualenv and > pypy. Antonio, do you know if virtualenv 1.6.4 is supposed to work > with pypy? I just tried and virtualenv 1.6.4 works fine with the latest nightly. It's most probably a problem with fedora's packaging. ciao, Anto From techtonik at gmail.com Sun Oct 2 09:33:30 2011 From: techtonik at gmail.com (anatoly techtonik) Date: Sun, 2 Oct 2011 10:33:30 +0300 Subject: [pypy-dev] Online demo for jitviewer Message-ID: Hi, Please, CC. The other online demo of jitviewer described in PyPy blog [1] seems to be down. Is there any other site to see how it looks? 1. http://morepypy.blogspot.com/2011/08/visualization-of-jitted-code.html -- anatoly t. 
From hakan at debian.org Tue Oct 4 00:00:22 2011 From: hakan at debian.org (Hakan Ardo) Date: Tue, 4 Oct 2011 00:00:22 +0200 Subject: [pypy-dev] [pypy-commit] pypy default: Hack to ensure that ll_arraycopy gets a proper effectinfo.write_descrs_arrays In-Reply-To: References: Message-ID: On Sat, Oct 1, 2011 at 1:27 PM, Maciej Fijalkowski wrote: > On Sat, Oct 1, 2011 at 4:38 AM, Hakan Ardo wrote: >> On Fri, Sep 30, 2011 at 11:17 PM, Maciej Fijalkowski wrote: >>> On Fri, Sep 30, 2011 at 6:15 PM, Hakan Ardo wrote: >>>> Hi, >>>> is there a better way to fix this? The same kind of issue might arise elsewhere? >>> >>> Make sure that raw_memcopy has the correct effect on analyzer? >> >> What effect would that be? Setting extraeffect=EF_RANDOM_EFFECTS as it >> can write anywhere? Can I then somehow give ll_arraycopy a more >> restrictive effectinfo? > > Armin committed this on trunk: 78fddfb51114 Yes, that improves the hack. However, it still makes me concerned about any other (potential future) usages of raw_memcopy. Won't they have the same issue? How about we set extraeffect=EF_RANDOM_EFFECTS in the effectinfo of raw_memcopy and introduce a decorator that would allow us to lessen the effect inherited by a function calling it. Something like:

    def raw_memcopy_effect_in_arraycopy(source, dest, source_start, dest_start, length):
        dest[dest_start] = source[source_start]

    @replace_inherited_effect_of(raw_memcopy, with=raw_memcopy_effect_in_arraycopy)
    def ll_arraycopy(source, dest, source_start, dest_start, length):
        ...
        raw_memcopy(...)

-- Håkan Ardö 
From fijall at gmail.com Tue Oct 4 00:03:23 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 4 Oct 2011 00:03:23 +0200 Subject: [pypy-dev] [pypy-commit] pypy default: Hack to ensure that ll_arraycopy gets a proper effectinfo.write_descrs_arrays In-Reply-To: References: Message-ID: On Tue, Oct 4, 2011 at 12:00 AM, Hakan Ardo wrote: > On Sat, Oct 1, 2011 at 1:27 PM, Maciej Fijalkowski wrote: >> On Sat, Oct 1, 2011 at 4:38 AM, Hakan Ardo wrote: >>> On Fri, Sep 30, 2011 at 11:17 PM, Maciej Fijalkowski wrote: >>>> On Fri, Sep 30, 2011 at 6:15 PM, Hakan Ardo wrote: >>>>> Hi, >>>>> is there a better way to fix this? The same kind of issue might arise elsewhere? >>>> >>>> Make sure that raw_memcopy has the correct effect on analyzer? >>> >>> What effect would that be? Setting extraeffect=EF_RANDOM_EFFECTS as it >>> can write anywhere? Can I then somehow give ll_arraycopy a more >>> restrictive effectinfo? >> >> Armin committed this on trunk: 78fddfb51114 > > Yes, that improves the hack. However, it still makes me concerned about > any other (potential future) usages of raw_memcopy. Won't they have the > same issue? > > How about we set extraeffect=EF_RANDOM_EFFECTS in the effectinfo of > raw_memcopy and introduce a decorator that would allow us to lessen > the effect inherited by a function calling it. Something like:
>
>     def raw_memcopy_effect_in_arraycopy(source, dest, source_start, dest_start, length):
>         dest[dest_start] = source[source_start]
>
>     @replace_inherited_effect_of(raw_memcopy, with=raw_memcopy_effect_in_arraycopy)
>     def ll_arraycopy(source, dest, source_start, dest_start, length):
>         ...
>         raw_memcopy(...)
>
> --
> Håkan Ardö
>
How many usages of raw_memcopy are there? 
I guess not very many From igor.katson at gmail.com Tue Oct 4 12:07:20 2011 From: igor.katson at gmail.com (Igor Katson) Date: Tue, 04 Oct 2011 14:07:20 +0400 Subject: [pypy-dev] PyPQ: a dbapi 2 PostgreSQL driver working with pypy Message-ID: <4E8ADAD8.5070701@gmail.com> Hi, pypy developers. check out pypq, a dbapi 2 PostgreSQL driver working with pypy http://pypi.python.org/pypi/pypq I've made it to test how will my existing django sites work with pypy I will be glad for any help. Feel free to contact me if you want to participate. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan at dlo.me Tue Oct 4 15:45:31 2011 From: dan at dlo.me (Dan Loewenherz) Date: Tue, 4 Oct 2011 06:45:31 -0700 Subject: [pypy-dev] PyPQ: a dbapi 2 PostgreSQL driver working with pypy In-Reply-To: <4E8ADAD8.5070701@gmail.com> References: <4E8ADAD8.5070701@gmail.com> Message-ID: Hi Igor, This is awesome. Quick questions: What is the feature parity with psycopg2? Are there any large components that aren't up to speed and need working on? Thanks! Dan mobile 786-201-1161 | web http://dlo.me/ | twitter @dwlz On Tue, Oct 4, 2011 at 3:07 AM, Igor Katson wrote: > Hi, pypy developers. > > check out pypq, a dbapi 2 PostgreSQL driver working with pypy > http://pypi.python.org/pypi/pypq > > I've made it to test how will my existing django sites work with pypy > > I will be glad for any help. > Feel free to contact me if you want to participate. > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From igor.katson at gmail.com Tue Oct 4 16:51:58 2011 From: igor.katson at gmail.com (Igor Katson) Date: Tue, 04 Oct 2011 18:51:58 +0400 Subject: [pypy-dev] PyPQ: a dbapi 2 PostgreSQL driver working with pypy In-Reply-To: References: <4E8ADAD8.5070701@gmail.com> Message-ID: <4E8B1D8E.70101@gmail.com> Hi, Dan, before answering I'll describe the situation a bit. there was a question today, if I know about pg8000 or psycopg2ct. As for pg8000, pure python should be slower than ctypes, anyway, so I don't think these two should be compared. But at the time of making pypq, I did not know about psycopg2ct. A couple of weeks ago there was no mention of it on the pypy psycopg2 compatibility page, and yesterday (when I decided to make pypq) I did not find it while googling the topic. Though if I did know that, I wouldn't write my own implementation. Luckily, it took me only one day of coding, and therefore there are two key differences: - pypq is a lot simpler. You can look at the code of both to see that - pypq does not try to be psycopg2, only a couple of functions from there for django compatibility As soon as there is another project of this kind, I'm not sure if I'll go on with this. For production usage, pypq needs at least a complete test coverage, so DO NOT USE it in production, it is very alpha. About your questions, Dan: - this is not a replacement for psycopg2, as psycopg2ct is, but instead a dbapi2-compliant driver, which looks almost the same as psycopg2, cause they share the API. Making it a replacement was not a concern - i did not profile the code, just tested the results on my sites visually, and on cpython 2.7, I got roughly the same speeds as with psycopg2. It should be pretty fast, but if there is a bottleneck, the code is pretty simple to explore and fix. What I can think of now is processing very large result sets might be slow, cause for every cell there is a type cast going on, especially for complex types like datetime.timedelta. 
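[Editor's note] To make Igor's point about per-cell casts concrete: a libpq-based driver receives every result cell as text together with a type OID, and must look up and apply a Python converter for each cell. A simplified, hypothetical sketch of that dispatch follows; the OIDs are PostgreSQL's standard built-in ones, but this is an illustration, not PyPQ's actual `datatypes.py` code:

```python
import datetime

# Hypothetical miniature of a driver's typecast table.  Over libpq's text
# protocol, PostgreSQL returns each cell as a string plus a type OID; the
# driver maps the OID to a Python converter and applies it cell by cell.
TYPECASTS = {
    23:   int,                                  # int4
    701:  float,                                # float8
    25:   lambda s: s,                          # text
    1082: lambda s: datetime.datetime.strptime(s, "%Y-%m-%d").date(),  # date
}

def cast_row(raw_row, type_oids):
    # One Python-level call per cell: this is the per-cell overhead that
    # makes very large result sets comparatively slow in such a driver.
    return tuple(TYPECASTS[oid](value) for value, oid in zip(raw_row, type_oids))
```

The per-cell dispatch is also why complex types (intervals parsed into `datetime.timedelta`, for instance) cost noticeably more than integers.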
See datatypes.py for details. But still, the fact, that a working postgres driver for python could be written in a day, is awesome :) So as long as the code is simple, it could be an alternative. Regards, Igor. On 10/04/2011 05:45 PM, Dan Loewenherz wrote: > Hi Igor, > > This is awesome. > > Quick questions: What is the feature parity with psycopg2? Are there > any large components that aren't up to speed and need working on? > > Thanks! > Dan > > mobile 786-201-1161 | web http://dlo.me/ | twitter @dwlz > > > On Tue, Oct 4, 2011 at 3:07 AM, Igor Katson > wrote: > > Hi, pypy developers. > > check out pypq, a dbapi 2 PostgreSQL driver working with pypy > http://pypi.python.org/pypi/pypq > > I've made it to test how will my existing django sites work with pypy > > I will be glad for any help. > Feel free to contact me if you want to participate. > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Tue Oct 4 23:38:49 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 4 Oct 2011 23:38:49 +0200 Subject: [pypy-dev] PyPQ: a dbapi 2 PostgreSQL driver working with pypy In-Reply-To: <4E8B1D8E.70101@gmail.com> References: <4E8ADAD8.5070701@gmail.com> <4E8B1D8E.70101@gmail.com> Message-ID: On Tue, Oct 4, 2011 at 4:51 PM, Igor Katson wrote: > Hi, Dan, > before answering I'll describe the situation a bit. > > there was a question today, if I know about pg8000 or psycopg2ct. > > As for pg8000, pure python should be slower than ctypes, anyway, so I don't > think these two should be compared. 
[citation needed] From igor.katson at gmail.com Wed Oct 5 12:59:53 2011 From: igor.katson at gmail.com (Igor Katson) Date: Wed, 05 Oct 2011 14:59:53 +0400 Subject: [pypy-dev] PyPQ: a dbapi 2 PostgreSQL driver working with pypy In-Reply-To: References: <4E8ADAD8.5070701@gmail.com> <4E8B1D8E.70101@gmail.com> Message-ID: <4E8C38A9.9000203@gmail.com> So I've launched a few tests for all PostgreSQL python drivers out there, and optimized PyPQ a bit along the way.

*IntegerInsert* test does this 30000 times (I wanted more, but pg8000 bloats postgres memory usage to 1.5 gigabytes on this for some reason, so I lowered the amount of queries to 30000):

    cursor.executemany('insert into test_table values(%s, %s, %s, %s)', [(1,2,3,4)] * 30000)

*IntegerSelect* selects this data back into python
*VariableDataInsert* does the same as IntegerInsert, but inserts a string, a datetime, a date and a timestamp into the database (except for pg8000, which told me that it did not support timestamps)
*VariableDataSelect* selects this data back into python

cPython 2.7.2 (32-bit, archlinux latest build), 30000 inserts
Psycopg2IntegerInsert.test_insert took 1.78s
.Psycopg2IntegerSelect.test_select took 0.06s
.Psycopg2VariableDataInsert.test_insert took 2.57s
.Psycopg2VariableDataSelect.test_select took 0.25s
.Psycopg2ctIntegerInsert.test_insert took 4.46s
.Psycopg2ctIntegerSelect.test_select took 1.62s
.Psycopg2ctVariableDataInsert.test_insert took 6.00s
.Psycopg2ctVariableDataSelect.test_select took 3.31s
.PyPQIntegerInsertTest.test_insert took 3.41s
.PyPQIntegerSelectTest.test_select took 0.84s
.PyPQVariableDataInsertTest.test_insert took 4.07s
.PyPQVariableDataSelectTest.test_select took 3.70s
pg8000IntegerInsert.test_insert took 16.20s
.pg8000IntegerSelect.test_select took 1.43s
.pg8000VariableDataInsert.test_insert took 18.00s
.pg8000VariableDataSelect.test_select took 2.17s

PyPy 1.6.0 (32-bit, archlinux latest build), 30000 inserts
Psycopg2ctIntegerInsert.test_insert took 2.69s 
.Psycopg2ctIntegerSelect.test_select took 0.63s
.Psycopg2ctVariableDataInsert.test_insert took 4.53s
.Psycopg2ctVariableDataSelect.test_select took 1.36s
.PyPQIntegerInsertTest.test_insert took 4.61s
.PyPQIntegerSelectTest.test_select took 0.37s
.PyPQVariableDataInsertTest.test_insert took 4.48s
.PyPQVariableDataSelectTest.test_select took 1.58s
pg8000IntegerInsert.test_insert took 8.34s
.pg8000IntegerSelect.test_select took 0.60s
.pg8000VariableDataInsert.test_insert took 9.15s
.pg8000VariableDataSelect.test_select took 1.64s

As we can see, pg8000 is slow on inserts, and as I've said, it does some strange things to my postgres, bloating the postgres memory usage to 1.5 gigabytes (I tried to insert 100000 records with executemany). On cPython, pypq is faster than psycopg2ct and pg8000, except for the VariableDataSelect test. On PyPy, all of them get faster, except pypq, though it is still a bit faster than psycopg2ct in 2 tests. Next, I tested pypq side by side to see the difference more clearly. Here are the results. 
cPython 2.7.2 (32-bit, archlinux latest build), 200000 inserts
Psycopg2IntegerInsert.test_insert took 12.22s
.Psycopg2IntegerSelect.test_select took 0.39s
.Psycopg2VariableDataInsert.test_insert took 17.30s
.Psycopg2VariableDataSelect.test_select took 1.71s
.Psycopg2ctIntegerInsert.test_insert took 28.56s
.Psycopg2ctIntegerSelect.test_select took 10.48s
.Psycopg2ctVariableDataInsert.test_insert took 38.53s
.Psycopg2ctVariableDataSelect.test_select took 21.67s
.PyPQIntegerInsertTest.test_insert took 22.53s
.PyPQIntegerSelectTest.test_select took 5.59s
.PyPQVariableDataInsertTest.test_insert took 26.86s
.PyPQVariableDataSelectTest.test_select took 24.84s

PyPy 1.6.0 (32-bit, archlinux latest build), 200000 inserts
Psycopg2ctIntegerInsert.test_insert took 14.11s
.Psycopg2ctIntegerSelect.test_select took 3.18s
.Psycopg2ctVariableDataInsert.test_insert took 29.36s
.Psycopg2ctVariableDataSelect.test_select took 7.78s
.PyPQIntegerInsertTest.test_insert took 25.91s
.PyPQIntegerSelectTest.test_select took 1.92s
.PyPQVariableDataInsertTest.test_insert took 30.31s
.PyPQVariableDataSelectTest.test_select took 8.73s

On 10/05/2011 01:38 AM, Maciej Fijalkowski wrote: > On Tue, Oct 4, 2011 at 4:51 PM, Igor Katson wrote: >> Hi, Dan, >> before answering I'll describe the situation a bit. >> >> there was a question today, if I know about pg8000 or psycopg2ct. >> >> As for pg8000, pure python should be slower than ctypes, anyway, so I don't >> think these two should be compared. > [citation needed] From ram at rachum.com Wed Oct 5 13:44:14 2011 From: ram at rachum.com (Ram Rachum) Date: Wed, 5 Oct 2011 13:44:14 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP Message-ID: Hey guys, I upgraded to PyPy 1.6 on my 2 Windows XP 32 bit machines. It crashes on both systems when running the GarlicSim test suite. It shows a Windows error dialog saying "pypy.exe has encountered a problem and needs to close. We are sorry for the inconvenience." 
and giving this data: AppName: pypy.exe AppVer: 0.0.0.0 ModName: kernel32.dll ModVer: 5.1.2600.5512 Offset: 00040b0d I can also open a dialog with a lot of data on the error (don't know how useful it is) but Windows won't let me Ctrl-C it. What can I do? Thanks, Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.pyattaev at gmail.com Wed Oct 5 13:49:54 2011 From: alex.pyattaev at gmail.com (Alex Pyattaev) Date: Wed, 05 Oct 2011 14:49:54 +0300 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: Message-ID: <2039618.cn0o3l27J1@hunter-laptop.tontut.fi> 1. Switch to linux. It helps. 2. To get a meaningful error log try to run the pypy from terminal. To do copy-paste you will need 3-rd party terminal, i.e. power shell. Then you'll be able to copy the error messages. Without them it is pretty much impossible to identify the problem. 3. Another option is to use debugger to run the suite step-by-step and see what happens. Choose your poison. Alex. On Wednesday 05 October 2011 13:44:14 Ram Rachum wrote: > Hey guys, > > I upgraded to PyPy 1.6 on my 2 Windows XP 32 bit machines. It crashes on > both system when running the GarlicSim test suite. > > It shows a Windows error dialog saying "pypy.exe has encountered a problem > and needs to close. We are sorry for the inconvenience." and giving this > data: > > AppName: pypy.exe AppVer: 0.0.0.0 ModName: kernel32.dll > ModVer: 5.1.2600.5512 Offset: 00040b0d > > > I can also open a dialog with a lot of data on the error (don't know how > useful it is) but Windows won't let me Ctrl-C it. > > > What can I do? > > > Thanks, > Ram. From amauryfa at gmail.com Wed Oct 5 13:54:48 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 13:54:48 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: Message-ID: 2011/10/5 Ram Rachum : > Hey guys, > I upgraded to PyPy 1.6 on my 2 Windows XP 32 bit machines. 
It crashes on > both system when running the GarlicSim test suite. > It shows a Windows error dialog saying "pypy.exe has encountered a problem > and needs to close. ?We are sorry for the inconvenience." and giving this > data: > > AppName: pypy.exe AppVer: 0.0.0.0 ModName: kernel32.dll > ModVer: 5.1.2600.5512 Offset: 00040b0d > > I can also open a dialog with a lot of data on the error (don't know how > useful it is) but Windows won't let me Ctrl-C it. > > What can I do? pypy is a console application, i.e. it opens a Console window. You should be able to "Select all" and "Copy" all the content (click on the top-left icon) -- Amaury Forgeot d'Arc From ram at rachum.com Wed Oct 5 13:57:58 2011 From: ram at rachum.com (Ram Rachum) Date: Wed, 5 Oct 2011 13:57:58 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: <2039618.cn0o3l27J1@hunter-laptop.tontut.fi> References: <2039618.cn0o3l27J1@hunter-laptop.tontut.fi> Message-ID: On Wed, Oct 5, 2011 at 1:49 PM, Alex Pyattaev wrote: > 1. Switch to linux. It helps. > Not funny. 2. To get a meaningful error log try to run the pypy from terminal. To do > copy-paste you will need 3-rd party terminal, i.e. power shell. Then you'll > be > able to copy the error messages. Without them it is pretty much impossible > to > identify the problem. > I am running PyPy from terminal, bash provided by msys. The error still comes up in a dialog and the shell contains only the output from `nose` up to the failure, with no word on the failure. 3. Another option is to use debugger to run the suite step-by-step and see > what happens. > I'll give that a try. > > Choose your poison. > Alex. > On Wednesday 05 October 2011 13:44:14 Ram Rachum wrote: > > Hey guys, > > > > I upgraded to PyPy 1.6 on my 2 Windows XP 32 bit machines. It crashes on > > both system when running the GarlicSim test suite. > > > > It shows a Windows error dialog saying "pypy.exe has encountered a > problem > > and needs to close. 
We are sorry for the inconvenience." and giving this > > data: > > > > AppName: pypy.exe AppVer: 0.0.0.0 ModName: kernel32.dll > > ModVer: 5.1.2600.5512 Offset: 00040b0d > > > > > > I can also open a dialog with a lot of data on the error (don't know how > > useful it is) but Windows won't let me Ctrl-C it. > > > > > > What can I do? > > > > > > Thanks, > > Ram. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Wed Oct 5 14:07:20 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 14:07:20 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <2039618.cn0o3l27J1@hunter-laptop.tontut.fi> Message-ID: 2011/10/5 Ram Rachum : > I am running PyPy from terminal, bash provided by msys. The error still > comes up in a dialog and the shell contains only the output from `nose` up > to the failure, with no word on the failure. Can you still see which test fails? and then add print statements to determine the exact location of the crash? -- Amaury Forgeot d'Arc From ram at rachum.com Wed Oct 5 14:10:21 2011 From: ram at rachum.com (Ram Rachum) Date: Wed, 5 Oct 2011 14:10:21 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <2039618.cn0o3l27J1@hunter-laptop.tontut.fi> Message-ID: On Wed, Oct 5, 2011 at 2:07 PM, Amaury Forgeot d'Arc wrote: > 2011/10/5 Ram Rachum : > > I am running PyPy from terminal, bash provided by msys. The error still > > comes up in a dialog and the shell contains only the output from `nose` > up > > to the failure, with no word on the failure. > > Can you still see which test fails? > and then add print statements to determine the exact location of the crash? > > -- > Amaury Forgeot d'Arc > I have hundreds of tests, and PyPy fails before a single one begins. 
It seems that PyPy crashes somewhere in nose's initialization. Isn't there a way to find the last Python line run before the crash without stepping with a finer granularity every time? Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Wed Oct 5 14:11:56 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 14:11:56 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <2039618.cn0o3l27J1@hunter-laptop.tontut.fi> Message-ID: 2011/10/5 Ram Rachum : > I have hundreds of tests, and PyPy fails before a single one begins. It > seems that PyPy crashes somewhere in nose's initialization. > Isn't there a way to find the last Python line run before the crash without > stepping with a finer granularity every time? Not without a debugger, I'm afraid -- Amaury Forgeot d'Arc From ram at rachum.com Wed Oct 5 14:18:08 2011 From: ram at rachum.com (Ram Rachum) Date: Wed, 5 Oct 2011 14:18:08 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <2039618.cn0o3l27J1@hunter-laptop.tontut.fi> Message-ID: On Wed, Oct 5, 2011 at 2:11 PM, Amaury Forgeot d'Arc wrote: > 2011/10/5 Ram Rachum : > > I have hundreds of tests, and PyPy fails before a single one begins. It > > seems that PyPy crashes somewhere in nose's initialization. > > Isn't there a way to find the last Python line run before the crash > without > > stepping with a finer granularity every time? > > Not without a debugger, I'm afraid > > -- > Amaury Forgeot d'Arc > How do I run the Nose test suite on Pypy with a debugger? I usually use Wing IDE, but it doesn't support PyPy. I'm also aware of Nose's `--pdb` flag which drops you into the debugger after an error, but it doesn't work here because this crash seems to be happening at a lower level. So I don't know how to start this in a debugger. Ram. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amauryfa at gmail.com Wed Oct 5 14:22:43 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 14:22:43 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <2039618.cn0o3l27J1@hunter-laptop.tontut.fi> Message-ID: 2011/10/5 Ram Rachum : > How do I run the Nose test suite on Pypy with a debugger? I usually use Wing > IDE, but it doesn't support PyPy. I'm also aware of Nose's `--pdb` flag > which drops you into the debugger after an error, but it doesn't work here > because this crash seems to be happening at a lower level. So I don't know > how to start this in a debugger. A Python debugger won't help, since it runs in the same (segfaulting) process. You need a C-level debugger, i.e. Visual Studio. -- Amaury Forgeot d'Arc From alex.pyattaev at gmail.com Wed Oct 5 14:23:15 2011 From: alex.pyattaev at gmail.com (Alex Pyattaev) Date: Wed, 05 Oct 2011 15:23:15 +0300 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: Message-ID: <2356196.Uj9cm3tQAm@hunter-laptop.tontut.fi> Generally, any binary-level debugger such as gdb or MSVC should work with pypy. At the very least you will find which operation crashed. If it is something really specific, for example sin/log/sign, it might be quite easy to map it back to python code. If it is not, it will be nearly impossible to find the original source line (at least I've failed when I tried). Another option is to edit the sources of the test suite adding print statements incrementally until you spot the place where it crashes. It is a slow, but very reliable way. That is of course if it is a particular segment of python code that crashes it. Also, could you send your exact environtment specs? I'll try to replicate it on a VM and see if it crashes for me too. I have an XP VM somewhere. PS: Sorry for my stupid joke about switching to linux. It was meant to cheer you up a bit. Alex. 
On Wednesday 05 October 2011 14:18:08 Ram Rachum wrote: > On Wed, Oct 5, 2011 at 2:11 PM, Amaury Forgeot d'Arc wrote: > > 2011/10/5 Ram Rachum : > > > I have hundreds of tests, and PyPy fails before a single one begins. > > > It > > > seems that PyPy crashes somewhere in nose's initialization. > > > Isn't there a way to find the last Python line run before the crash > > > > without > > > > > stepping with a finer granularity every time? > > > > Not without a debugger, I'm afraid > > > > -- > > Amaury Forgeot d'Arc > > How do I run the Nose test suite on Pypy with a debugger? I usually use Wing > IDE, but it doesn't support PyPy. I'm also aware of Nose's `--pdb` flag > which drops you into the debugger after an error, but it doesn't work here > because this crash seems to be happening at a lower level. So I don't know > how to start this in a debugger. > > > Ram. From ram at rachum.com Wed Oct 5 14:27:45 2011 From: ram at rachum.com (Ram Rachum) Date: Wed, 5 Oct 2011 14:27:45 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <2039618.cn0o3l27J1@hunter-laptop.tontut.fi> Message-ID: On Wed, Oct 5, 2011 at 2:22 PM, Amaury Forgeot d'Arc wrote: > 2011/10/5 Ram Rachum : > > How do I run the Nose test suite on Pypy with a debugger? I usually use > Wing > > IDE, but it doesn't support PyPy. I'm also aware of Nose's `--pdb` flag > > which drops you into the debugger after an error, but it doesn't work > here > > because this crash seems to be happening at a lower level. So I don't > know > > how to start this in a debugger. > > A Python debugger won't help, since it runs in the same (segfaulting) > process. > You need a C-level debugger, i.e. Visual Studio. > > -- > Amaury Forgeot d'Arc > I don't know how to use that... I don't program C at all, only Python. If I use a Python debugger, can't I just step forward line by line, see where I get the crash, and then isolate the offending line? Ram. 
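[Editor's note] One pure-Python alternative to sprinkling print statements, for anyone trying Alex's bisection approach: `sys.settrace` can log every executed line, so after a hard crash the last entry written points near the faulting code. A sketch (slow, and it may perturb JIT behaviour, so treat it as a last-resort localization aid rather than a proper debugger):

```python
import sys

def make_line_tracer(log=None):
    """Build a trace function that logs each executed line; after a
    segfault, the last line logged is close to the crash site."""
    log = log or sys.stderr
    def tracer(frame, event, arg):
        if event == "line":
            log.write("%s:%d\n" % (frame.f_code.co_filename, frame.f_lineno))
            log.flush()  # flush immediately so the log survives a hard crash
        return tracer  # keep tracing lines inside each newly entered frame
    return tracer

# Usage: call sys.settrace(make_line_tracer()) before starting the test
# suite, e.g. at the top of the nose entry point.
```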
-------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.pyattaev at gmail.com Wed Oct 5 14:34:05 2011 From: alex.pyattaev at gmail.com (Alex Pyattaev) Date: Wed, 05 Oct 2011 05:34:05 -0700 (PDT) Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: Message-ID: <15506692.5bAFRbJOpC@hunter-laptop.tontut.fi> > If I use a Python debugger, can't I just step forward line by line, see > where I get the crash, and then isolate the offending line? The way pypy works - no you can not really do that. In Cpython it works somewhat better, but not in PYPY. Basically you have to use C debugger to locate the crash point, because the program that crashes can not really identify where it crashed precisely (well it can but pypy detects crash points very unprecisely). So in order to figure where exactly it crashes you have to use C debugger. OR, as I have said, add print statements until you localize the bug. *This may not work in some cases, as adding print's actually modifies the program=> it might not crash at all* Alex. On Wednesday 05 October 2011 14:27:45 Ram Rachum wrote: From amauryfa at gmail.com Wed Oct 5 14:50:05 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 14:50:05 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: <15506692.5bAFRbJOpC@hunter-laptop.tontut.fi> References: <15506692.5bAFRbJOpC@hunter-laptop.tontut.fi> Message-ID: 2011/10/5 Alex Pyattaev : > In Cpython it works > somewhat better, but not in PYPY How is CPython behavior better with segfaults? 
-- Amaury Forgeot d'Arc From fuzzyman at gmail.com Wed Oct 5 15:20:20 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Wed, 5 Oct 2011 14:20:20 +0100 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <15506692.5bAFRbJOpC@hunter-laptop.tontut.fi> Message-ID: On 5 October 2011 13:50, Amaury Forgeot d'Arc wrote: > 2011/10/5 Alex Pyattaev : > > In Cpython it works > > somewhat better, but not in PYPY > > How is CPython behavior better with segfaults? > > Well, in recent versions it gained the faulthandler module... :-) http://pypi.python.org/pypi/faulthandler/ Michael > -- > Amaury Forgeot d'Arc > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Wed Oct 5 15:22:30 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 15:22:30 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <15506692.5bAFRbJOpC@hunter-laptop.tontut.fi> Message-ID: 2011/10/5 Michael Foord : > Well, in recent versions it gained the faulthandler module... :-) > > http://pypi.python.org/pypi/faulthandler/ Yes, and this was a *great* improvement! I hope we will be able to adapt it to pypy. -- Amaury Forgeot d'Arc From max.lavrenov at gmail.com Wed Oct 5 15:43:45 2011 From: max.lavrenov at gmail.com (Max Lavrenov) Date: Wed, 5 Oct 2011 17:43:45 +0400 Subject: [pypy-dev] strange error in urlunparse Message-ID: Hello all. 
I have been doing some experiments with pypy in our twisted project and I get a strange error in the standard urlparse.urlunparse function def urlunparse(data): """Put a parsed URL back together again. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had redundant delimiters, e.g. a ? with an empty query (the draft states that these are equivalent).""" scheme, netloc, url, params, query, fragment = data if params: url = "%s;%s" % (url, params) return urlunsplit((scheme, netloc, url, query, fragment)) If I try using Apache Benchmark, after ~500 successful responses I start to get an error on the line "if params": (line 216 in the real urlparse.py file) with the message "local variable 'params' referenced before assignment". How is that possible? My pypy version is "Python 2.7.1 (7acf2b8fcafd, Sep 26 2011, 11:38:29) [PyPy 1.6.1-dev0 with GCC 4.6.1] on linux2" Best regards, Max Lavrenov From brian.curtin at gmail.com Wed Oct 5 15:49:53 2011 From: brian.curtin at gmail.com (Brian Curtin) Date: Wed, 5 Oct 2011 08:49:53 -0500 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <15506692.5bAFRbJOpC@hunter-laptop.tontut.fi> Message-ID: On Wed, Oct 5, 2011 at 08:22, Amaury Forgeot d'Arc wrote: > 2011/10/5 Michael Foord : >> Well, in recent versions it gained the faulthandler module... :-) >> >> http://pypi.python.org/pypi/faulthandler/ > > Yes, and this was a *great* improvement! > I hope we will be able to adapt it to pypy. It may be early to start bringing this up, but I recently created an extension to catch crashes in C code and write minidumps: https://bitbucket.org/briancurtin/minidumper. It's 3.x only at the moment, although I'll be backporting within the next few days. I'll have a look at how/if it works on PyPy once I get there.
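For readers unfamiliar with the module under discussion: enabling faulthandler is a one-liner. The sketch below uses the Python 3 stdlib version; for the 2.x interpreters in this thread it was the separate PyPI package linked above.

```python
import faulthandler

# Install handlers for fatal signals (SIGSEGV, SIGFPE, SIGABRT, SIGBUS,
# SIGILL) so a crash dumps the Python traceback of every thread to stderr
# before the process dies.
faulthandler.enable()
print(faulthandler.is_enabled())  # → True
```

This is exactly the kind of output that would have shortened the Windows XP debugging thread above: a Python-level traceback instead of a bare crash dialog.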
(minidumper will likely be absorbed into faulthandler once it becomes more complete, which is why I mention it) From ram at rachum.com Wed Oct 5 16:11:15 2011 From: ram at rachum.com (Ram Rachum) Date: Wed, 5 Oct 2011 16:11:15 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: <2356196.Uj9cm3tQAm@hunter-laptop.tontut.fi> References: <2356196.Uj9cm3tQAm@hunter-laptop.tontut.fi> Message-ID: On Wed, Oct 5, 2011 at 2:23 PM, Alex Pyattaev wrote: > Generally, any binary-level debugger such as gdb or MSVC should work with > pypy. At the very least you will find which operation crashed. > As I said to Amaury, I don't know how to use those... Python is the only language I program in. > If it is something really specific, for example > sin/log/sign, it might be quite easy to map it back to python code. If it > is > not, it will be nearly impossible to find the original source line (at > least > I've failed when I tried). > > Another option is to edit the sources of the test suite adding print > statements incrementally until you spot the place where it crashes. It is a > slow, but very reliable way. That is of course if it is a particular > segment > of python code that crashes it. > I'll try, thanks. > > Also, could you send your exact environtment specs? I'll try to replicate > it > on a VM and see if it crashes for me too. I have an XP VM somewhere. > What specs do you mean? It's just the recent PyPy 1.6 on a Windows XP SP3 32 bit machine with minimal packages installed. (distribute, pip, nosetests.) Let me know if you need any more data. > > PS: Sorry for my stupid joke about switching to linux. It was meant to > cheer > you up a bit. > Forgiven :) > Alex. > > On Wednesday 05 October 2011 14:18:08 Ram Rachum wrote: > > On Wed, Oct 5, 2011 at 2:11 PM, Amaury Forgeot d'Arc > wrote: > > > 2011/10/5 Ram Rachum : > > > > I have hundreds of tests, and PyPy fails before a single one begins. 
> > > > It > > > > seems that PyPy crashes somewhere in nose's initialization. > > > > Isn't there a way to find the last Python line run before the crash > > > > > > without > > > > > > > stepping with a finer granularity every time? > > > > > > Not without a debugger, I'm afraid > > > > > > -- > > > Amaury Forgeot d'Arc > > > > How do I run the Nose test suite on Pypy with a debugger? I usually use > Wing > > IDE, but it doesn't support PyPy. I'm also aware of Nose's `--pdb` flag > > which drops you into the debugger after an error, but it doesn't work > here > > because this crash seems to be happening at a lower level. So I don't > know > > how to start this in a debugger. > > > > > > Ram. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Wed Oct 5 16:14:54 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 16:14:54 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <15506692.5bAFRbJOpC@hunter-laptop.tontut.fi> Message-ID: 2011/10/5 Brian Curtin : > On Wed, Oct 5, 2011 at 08:22, Amaury Forgeot d'Arc wrote: >> 2011/10/5 Michael Foord : >>> Well, in recent versions it gained the faulthandler module... :-) >>> >>> http://pypi.python.org/pypi/faulthandler/ >> >> Yes, and this was a *great* improvement! >> I hope we will be able to adapt it to pypy. > > It may be early to start bringing this up, but I recently created an > extension to catch crashes in C code and write minidumps: > https://bitbucket.org/briancurtin/minidumper. It's 3.x only at the > moment, although I'll be backporting within the next few days. I'll > have a look at how/if it works on PyPy once I get there. 
The code is quite simple, and should probably compile as-is with pypy (except for the PyModuleDef of course), but it would be just as simple to rewrite it in RPython. By the way, there is a suspicious thing in your code: "app_name = pyname" will store the char* buffer into a static variable, but nothing guarantees that the PyObject passed in will stay alive! -- Amaury Forgeot d'Arc From amauryfa at gmail.com Wed Oct 5 16:30:50 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 16:30:50 +0200 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: References: Message-ID: 2011/10/5 Max Lavrenov : > after ~500 successful responses I start to get an error on the line "if > params" This looks like a JIT error to me; the JIT has a default threshold of 1000 iterations. Can you try again with pypy --jit threshold=-1 to completely disable the JIT? -- Amaury Forgeot d'Arc From max.lavrenov at gmail.com Wed Oct 5 17:03:35 2011 From: max.lavrenov at gmail.com (Max Lavrenov) Date: Wed, 5 Oct 2011 17:03:35 +0400 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: References: Message-ID: I could try writing a small application to reproduce this error. It happens in the twisted function twisted.web.client.getPage. Or maybe I should try another version of pypy or twisted (I use the trunk version of twisted)? On Wed, Oct 5, 2011 at 18:57, Max Lavrenov wrote: > Hello Amaury > Thanks for the response. > > No, I got the same error with this line > python --jit threshold=-1 /home/e-max/.virtualenvs/pypy/bin/twistd -n > dalight > > But if I change the urlunparse function and print the data variable first, > everything starts to work correctly. > > def urlunparse(data): > """Put a parsed URL back together again. This may result in a > slightly different, but equivalent URL, if the URL that was parsed > originally had redundant delimiters, e.g. a ?
with an empty query > (the draft states that these are equivalent).""" > print data > > scheme, netloc, url, params, query, fragment = data > if params: > url = "%s;%s" % (url, params) > return urlunsplit((scheme, netloc, url, query, fragment)) > > > On Wed, Oct 5, 2011 at 18:30, Amaury Forgeot d'Arc wrote: > >> 2011/10/5 Max Lavrenov : >> > after ~500 successfull responses i am starting to get error on line "if >> > params" >> >> Looks like a JIT error to me, which has a default threshold of 1000 >> iterations. >> Can you try again with >> pypy --jit threshold=-1 >> to completely disable the JIT? >> >> -- >> Amaury Forgeot d'Arc >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lavrenov at gmail.com Wed Oct 5 16:57:43 2011 From: max.lavrenov at gmail.com (Max Lavrenov) Date: Wed, 5 Oct 2011 18:57:43 +0400 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: References: Message-ID: Hello Amaury Thanks for response. No, i got same error with this line python --jit threshold=-1 /home/e-max/.virtualenvs/pypy/bin/twistd -n dalight But If i change urlunparse function and try printing date variable before, all starts work correctly. def urlunparse(data): """Put a parsed URL back together again. This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had redundant delimiters, e.g. a ? with an empty query (the draft states that these are equivalent).""" print data scheme, netloc, url, params, query, fragment = data if params: url = "%s;%s" % (url, params) return urlunsplit((scheme, netloc, url, query, fragment)) On Wed, Oct 5, 2011 at 18:30, Amaury Forgeot d'Arc wrote: > 2011/10/5 Max Lavrenov : > > after ~500 successfull responses i am starting to get error on line "if > > params" > > Looks like a JIT error to me, which has a default threshold of 1000 > iterations. > Can you try again with > pypy --jit threshold=-1 > to completely disable the JIT? 
> > -- > Amaury Forgeot d'Arc From amauryfa at gmail.com Wed Oct 5 17:55:37 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 17:55:37 +0200 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: References: Message-ID: 2011/10/5 Max Lavrenov : > I could try writing a small application to reproduce this error. It happens > in the twisted function twisted.web.client.getPage > Or maybe I should try another version of pypy or twisted (I use the trunk > version of twisted)? Yes, please try to find a minimal example that shows the issue. -- Amaury Forgeot d'Arc From ram at rachum.com Wed Oct 5 18:17:40 2011 From: ram at rachum.com (Ram Rachum) Date: Wed, 5 Oct 2011 18:17:40 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <2356196.Uj9cm3tQAm@hunter-laptop.tontut.fi> Message-ID: On Wed, Oct 5, 2011 at 4:11 PM, Ram Rachum wrote: > On Wed, Oct 5, 2011 at 2:23 PM, Alex Pyattaev wrote: > >> Another option is to edit the sources of the test suite adding print >> statements incrementally until you spot the place where it crashes. It is >> a >> slow, but very reliable way. That is of course if it is a particular >> segment >> of python code that crashes it. >> > > I'll try, thanks. > > Okay, I've spent a few hours print-debugging, and I think I've almost got it. The crash happens on the line: st = os.stat(s) inside `os.path.isdir`, where `s` is the string 'C:\\Documents and Settings\\User\\My Documents\\Python Projects\\GarlicSim\\garlicsim\\src' This is a directory that happens not to exist, but of course that is not a good reason to crash. I have tried running `os.stat(s)` in the PyPy shell with that same `s`, but didn't get a crash there. I don't know why it's crashing in Nose but not in the shell. Does anyone have a clue? Ram. -------------- next part -------------- An HTML attachment was scrubbed...
From amauryfa at gmail.com Wed Oct 5 18:51:55 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 18:51:55 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <2356196.Uj9cm3tQAm@hunter-laptop.tontut.fi> Message-ID: 2011/10/5 Ram Rachum : > Okay, I've spent a few hours print-debugging, and I think I've almost got > it. > The crash happens on a line: > st = os.stat(s) > inside `os.path.isdir`, where `s` is a string 'C:\\Documents and > Settings\\User\\My Documents\\Python Projects\\GarlicSim\\garlicsim\\src' > This is a directory that happens not to exist, but of course this is not a > good reason to crash. > I have tried running `os.stat(s)` in the PyPy shell with that same `s`, but > didn't get a crash there. I don't know why it's crashing in Nose but not in > the shell. > > Does anyone have a clue? It's possible that it's an RPython-level exception, or a bad handle because too many files are waiting for the garbage collector to close them. Can you give more information about the crash itself? - What are the last lines printed in the console? Try to disable "stdout capture" in Nose, by passing the -s option. - After the pypy process has exited, type "echo %ERRORLEVEL%" in the same console, to print the exit code of the last process. Which number is it? -- Amaury Forgeot d'Arc From ram at rachum.com Wed Oct 5 19:37:07 2011 From: ram at rachum.com (Ram Rachum) Date: Wed, 5 Oct 2011 19:37:07 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <2356196.Uj9cm3tQAm@hunter-laptop.tontut.fi> Message-ID: On Wed, Oct 5, 2011 at 6:51 PM, Amaury Forgeot d'Arc wrote: > 2011/10/5 Ram Rachum : > > Okay, I've spent a few hours print-debugging, and I think I've almost got > > it.
> > The crash happens on a line: > > st = os.stat(s) > > inside `os.path.isdir`, where `s` is a string 'C:\\Documents and > > Settings\\User\\My Documents\\Python Projects\\GarlicSim\\garlicsim\\src' > > This is a directory that happens not to exist, but of course this is not > a > > good reason to crash. > > I have tried running `os.stat(s)` in the PyPy shell with that same `s`, > but > > didn't get a crash there. I don't know why it's crashing in Nose but not > in > > the shell. > > > > Does anyone have a clue? > > it's possible that it's a RPython-level exception, or a bad handle because > too many files wait for the garbage collector to close them. > > Can you give more information about the crash itself? > - What are the last lines printed in the console? Try to disable > "stdout capture" in Nose, by passing the -s option. > This is the entire output: Preparing to run tests using Python 2.7.1 (080f42d5c4b4, Aug 23 2011, 11:41:11) [PyPy 1.6.0 with MSC v.1500 32 bit] Running tests directly from GarlicSim repo. Pypy doesn't have wxPython, not loading `garlicsim_wx` tests. nose.config: INFO: Set working dir to C:\Documents and Settings\User\My Documents\Python Projects\GarlicSim nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$'] nose.plugins.cover: INFO: Coverage report will include only packages: ['garlicsim', 'garlicsim_lib', 'garlicsim_wx', 'test_garlicsim', 'test_garlicsim_lib', 'test_garlicsim_wx', 'garlicsim', 'garlicsim_lib', 'garlicsim_wx', 'test_garlicsim', 'test_garlicsim_lib', 'test_garlicsim_wx', 'garlicsim', 'garlicsim_lib', 'garlicsim_wx', 'test_garlicsim', 'test_garlicsim_lib', 'test_garlicsim_wx'] - after the pypy process has exited, type "echo %ERRORLEVEL%" in the > same console, to print the exit code > of the last process. Which number is it? > -1073741819 > > -- > Amaury Forgeot d'Arc > -------------- next part -------------- An HTML attachment was scrubbed... 
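A side note on the exit code -1073741819 reported above (not from the thread itself): on Windows this is a 32-bit NTSTATUS value printed as a signed integer, and masking it to 32 bits recovers the familiar hex form.

```python
# The negative exit code is a two's-complement view of an NTSTATUS value.
exit_code = -1073741819
status = exit_code & 0xFFFFFFFF  # reinterpret as unsigned 32-bit
print(hex(status))  # → 0xc0000005
```

0xC0000005 is STATUS_ACCESS_VIOLATION, i.e. the process segfaulted, which matches the diagnosis in the rest of the thread.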
From anto.cuni at gmail.com Wed Oct 5 20:36:11 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 05 Oct 2011 20:36:11 +0200 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: References: Message-ID: <4E8CA39B.3080506@gmail.com> On 05/10/11 16:30, Amaury Forgeot d'Arc wrote: > 2011/10/5 Max Lavrenov: >> after ~500 successful responses I start to get an error on the line "if >> params" > > Looks like a JIT error to me, which has a default threshold of 1000 iterations. > Can you try again with > pypy --jit threshold=-1 > to completely disable the JIT? --jit threshold=-1 only disables the JIT for loops, but it will still compile functions. To completely disable the JIT you should use --jit off. Max, could you please try again with --jit off? Your problem reminds me of a bug that I encountered ~1 year ago: an UnboundLocalError which disappears if I print it. I don't remember the details, but in that case it was a bug in the JIT. ciao, Anto From max.lavrenov at gmail.com Wed Oct 5 20:55:00 2011 From: max.lavrenov at gmail.com (Max Lavrenov) Date: Wed, 5 Oct 2011 22:55:00 +0400 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: <4E8CA39B.3080506@gmail.com> References: <4E8CA39B.3080506@gmail.com> Message-ID: Hi Antonio You were right: with --jit off the project works just fine. I will post a simple project to reproduce this bug tomorrow. Best regards, Max On Wed, Oct 5, 2011 at 22:36, Antonio Cuni wrote: > On 05/10/11 16:30, Amaury Forgeot d'Arc wrote: > >> 2011/10/5 Max Lavrenov >> >: >> >>> after ~500 successful responses I start to get an error on the line "if >>> params" >>> >> >> Looks like a JIT error to me, which has a default threshold of 1000 >> iterations. >> Can you try again with >> pypy --jit threshold=-1 >> to completely disable the JIT? >> > > --jit threshold=-1 only disables the JIT for loops, but it will still > compile functions. > > To completely disable the JIT you should use --jit off.
> > Max, could you please try again with --jit off? > Your problem reminds me of a bug that I encountered ~1 year ago: an > UnboundLocalError which disappears if I print it. > > I don't remember the details, but in that case it was a bug in the JIT. > > ciao, > Anto From ram at rachum.com Wed Oct 5 21:06:03 2011 From: ram at rachum.com (Ram Rachum) Date: Wed, 5 Oct 2011 21:06:03 +0200 Subject: [pypy-dev] gettrace? Message-ID: Hey guys, I notice that PyPy doesn't offer a `sys.gettrace` function, given that it is an implementation detail. Is there any other way to do the equivalent of `sys.gettrace` in PyPy? Thanks, Ram. From amauryfa at gmail.com Wed Oct 5 21:54:45 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 21:54:45 +0200 Subject: [pypy-dev] gettrace? In-Reply-To: References: Message-ID: 2011/10/5 Ram Rachum : > Hey guys, > I notice that PyPy doesn't offer a `sys.gettrace` function, given that it is > an implementation detail. > Is there any other way to do the equivalent of `sys.gettrace` in PyPy? Hmm, it was probably overlooked. Seems trivial to implement; I will give it a try. -- Amaury Forgeot d'Arc From arigo at tunes.org Wed Oct 5 22:06:52 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 5 Oct 2011 22:06:52 +0200 Subject: [pypy-dev] gettrace? In-Reply-To: References: Message-ID: Hi Amaury, On Wed, Oct 5, 2011 at 21:54, Amaury Forgeot d'Arc wrote: >> Is there any other way to do the equivalent of `sys.gettrace` in PyPy? > > Hmm, it was probably overlooked. > Seems trivial to implement; I will give it a try. Indeed, sys.gettrace() and sys.getprofile() were added in Python 2.6, with proper unit tests in test_sys_settrace.py. We should try to understand why this test didn't fail so far on PyPy... Do we run it at all? If not, it's kind of bad.
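For readers following along, this is what the settrace/gettrace pair does on interpreters that provide both (a quick sketch; the `tracer` function here is just a placeholder, not code from the thread):

```python
import sys

def tracer(frame, event, arg):
    # A do-nothing trace function; returning itself keeps tracing
    # active inside nested scopes as well.
    return tracer

sys.settrace(tracer)
retrieved = sys.gettrace()  # the getter this thread asks PyPy to add
sys.settrace(None)          # always uninstall the hook when done
print(retrieved is tracer)  # → True
```

Without `sys.gettrace`, a debugger that wants to temporarily replace the trace function has no way to save and restore whatever hook was already installed, which is presumably why Ram needs it.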
Armin From amauryfa at gmail.com Wed Oct 5 22:21:32 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 5 Oct 2011 22:21:32 +0200 Subject: [pypy-dev] gettrace? In-Reply-To: References: Message-ID: 2011/10/5 Armin Rigo : > Indeed, sys.gettrace() and sys.getprofile() were added in Python 2.6, > with proper unit tests in test_sys_settrace.py. We should try to > understand why this test didn't fail so far on PyPy... Do we run it > at all? If not, it's kind of bad. test_sys_settrace.py runs correctly, but the function is named "set_and_retrieve_func"... without the test_ prefix! Fixed in revision 383ca802ba07, will fix it in cpython as well. -- Amaury Forgeot d'Arc From alex.pyattaev at gmail.com Thu Oct 6 00:02:10 2011 From: alex.pyattaev at gmail.com (Alex Pyattaev) Date: Thu, 06 Oct 2011 01:02:10 +0300 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: Message-ID: <4250804.t7WuOKXY7P@hunter-laptop.tontut.fi> I've had a very similar issue, as in something crashing only when run many times, when I had a bug in a container type implemented in C. Basically, I had a wrong refcount for the objects, which caused them to be freed by the garbage collector while they were still in use. Maybe something similar happens in the code that wraps the Windows API that handles file opening. That would explain why the bug never happens on Linux. A good candidate would be an incorrect refcount for the return value when the file does not exist. Try something like this: s="some_file" rets=[] for i in range(1000): rets.append(os.stat(s)) gc.collect() #Do something that uses lots of RAM (but a random amount, preferably in small blocks) print rets if it crashes then you have exactly that issue. 1000 might not be enough to toggle the crash though, as you need the OS to actually allocate different segments of memory for this to work. The more RAM you have, the more cycles you need to toggle the crash.
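Spelled out as a runnable script, the sketch above might look like the following. The memory-churning step is only one guess at what Alex intends (his "#Do something that uses lots of RAM" placeholder is deliberately open), and the path and cycle count are placeholders; on a correct build the loop simply runs to completion instead of crashing.

```python
import gc
import os

def stress_stat(path, cycles=200):
    """Repeatedly stat a (possibly missing) path while churning memory,
    trying to provoke use-after-free style bugs in the stat wrapper."""
    rets = []
    for i in range(cycles):
        try:
            rets.append(os.stat(path))
        except OSError:
            rets.append(None)  # path does not exist; keep looping anyway
        gc.collect()
        # Guessed stand-in for the "use lots of RAM" step: allocate a
        # varying number of small blocks, then drop them, so the allocator
        # reuses memory between iterations.
        junk = [bytearray(64) for _ in range(i % 257)]
        del junk
    return rets

results = stress_stat("some_file_that_should_not_exist")
print(len(results))  # → 200
```

Note that `os.stat` on a missing path normally raises `OSError` rather than crashing, which is why the loop catches it; the point of the exercise is to see whether the interpreter itself dies instead.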
At least this approach helped me to debug extension modules written in C. BTW, for me on VM the test case does not crash. But I have SP2 windows there. On Wednesday 05 October 2011 19:37:07 Ram Rachum wrote: > On Wed, Oct 5, 2011 at 6:51 PM, Amaury Forgeot d'Arc wrote: > > 2011/10/5 Ram Rachum : > > > Okay, I've spent a few hours print-debugging, and I think I've > > > almost got it. > > > > > > The crash happens on a line: > > > st = os.stat(s) > > > > > > inside `os.path.isdir`, where `s` is a string 'C:\\Documents and > > > Settings\\User\\My Documents\\Python > > > Projects\\GarlicSim\\garlicsim\\src' This is a directory that > > > happens not to exist, but of course this is not> > > a > > > > > good reason to crash. > > > I have tried running `os.stat(s)` in the PyPy shell with that same > > > `s`, > > > > but > > > > > didn't get a crash there. I don't know why it's crashing in Nose but > > > not> > > in > > > > > the shell. > > > > > > Does anyone have a clue? > > > > it's possible that it's a RPython-level exception, or a bad handle > > because too many files wait for the garbage collector to close them. > > > > Can you give more information about the crash itself? > > - What are the last lines printed in the console? Try to disable > > "stdout capture" in Nose, by passing the -s option. > > This is the entire output: > > Preparing to run tests using Python 2.7.1 (080f42d5c4b4, Aug 23 2011, > 11:41:11) > [PyPy 1.6.0 with MSC v.1500 32 bit] > Running tests directly from GarlicSim repo. > Pypy doesn't have wxPython, not loading `garlicsim_wx` tests. 
> nose.config: INFO: Set working dir to C:\Documents and Settings\User\My > Documents\Python Projects\GarlicSim > nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$'] > nose.plugins.cover: INFO: Coverage report will include only packages: > ['garlicsim', 'garlicsim_lib', 'garlicsim_wx', 'test_garlicsim', > 'test_garlicsim_lib', 'test_garlicsim_wx', 'garlicsim', 'garlicsim_lib', > 'garlicsim_wx', 'test_garlicsim', 'test_garlicsim_lib', 'test_garlicsim_wx', > 'garlicsim', 'garlicsim_lib', 'garlicsim_wx', 'test_garlicsim', > 'test_garlicsim_lib', 'test_garlicsim_wx'] > > > > > - after the pypy process has exited, type "echo %ERRORLEVEL%" in the > > > same console, to print the exit code > > of the last process. Which number is it? > > -1073741819 > > > -- > > Amaury Forgeot d'Arc From fijall at gmail.com Thu Oct 6 00:03:43 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 6 Oct 2011 00:03:43 +0200 Subject: [pypy-dev] PyPQ: a dbapi 2 PostgreSQL driver working with pypy In-Reply-To: <4E8C38A9.9000203@gmail.com> References: <4E8ADAD8.5070701@gmail.com> <4E8B1D8E.70101@gmail.com> <4E8C38A9.9000203@gmail.com> Message-ID: On Wed, Oct 5, 2011 at 12:59 PM, Igor Katson wrote: > So I've launched a few tests for all PostgreSQL python drivers > out there, and optimized PyPq a bit along the way. > > *IntergerInsert* test does this 30000 times (i wanted more, but pg8000 > bloats > postgres memory usage to 1,5 gigabytes on this somewhy, so i lowered the > amount > of queries to 30000) > > ? 
?cursor.executemany('insert into test_table values(%s, %s, %s, %s)', > [(1,2,3,4)] * 30000) > > *IntegerSelect* selects this data back into python > > *VariableDataInsert* does the same as IntegerInsert, but inserts a string, a > datetime, a date and a timestamp into the database (except for pg8000, which > told > me that it did not support timestamps) > > *VariableDataSelect* selects this data back into python > > cPython 2.7.2 (32-bit, archlinux latest build), 30000 inserts > > Psycopg2IntegerInsert.test_insert took 1.78s > .Psycopg2IntegerSelect.test_select took 0.06s > .Psycopg2VariableDataInsert.test_insert took 2.57s > .Psycopg2VariableDataSelect.test_select took 0.25s > > .Psycopg2ctIntegerInsert.test_insert took 4.46s > .Psycopg2ctIntegerSelect.test_select took 1.62s > .Psycopg2ctVariableDataInsert.test_insert took 6.00s > .Psycopg2ctVariableDataSelect.test_select took 3.31s > > .PyPQIntegerInsertTest.test_insert took 3.41s > .PyPQIntegerSelectTest.test_select took 0.84s > .PyPQVariableDataInsertTest.test_insert took 4.07s > .PyPQVariableDataSelectTest.test_select took 3.70s > > pg8000IntegerInsert.test_insert took 16.20s > .pg8000IntegerSelect.test_select took 1.43s > .pg8000VariableDataInsert.test_insert took 18.00s > .pg8000VariableDataSelect.test_select took 2.17s > > PyPy 1.6.0 (32-bit, archlinux latest build), 30000 inserts > > Psycopg2ctIntegerInsert.test_insert took 2.69s > .Psycopg2ctIntegerSelect.test_select took 0.63s > .Psycopg2ctVariableDataInsert.test_insert took 4.53s > .Psycopg2ctVariableDataSelect.test_select took 1.36s > > .PyPQIntegerInsertTest.test_insert took 4.61s > .PyPQIntegerSelectTest.test_select took 0.37s > .PyPQVariableDataInsertTest.test_insert took 4.48s > .PyPQVariableDataSelectTest.test_select took 1.58s > > pg8000IntegerInsert.test_insert took 8.34s > .pg8000IntegerSelect.test_select took 0.60s > .pg8000VariableDataInsert.test_insert took 9.15s > .pg8000VariableDataSelect.test_select took 1.64s > > As we can see, pg8000 is 
slow on inserts, and as i've said, it does some > strange > things to my postgres, bloating the postgres memory usage to 1.5 gigabytes > (i tried > to insert 100000 records with executemany) > > On cPython, pypq is faster than psycopg2ct and pg8000, except for > VariableDataSelect > test. > On PyPy, all of them get faster, except pypq, though it is still a bit > faster > than psycopg2ct in 2 tests. > > Next, I tested pypq side by side to see the difference more clearly. > > Here are the results. > > cPython 2.7.2 (32-bit, archlinux latest build), 200000 inserts > > Psycopg2IntegerInsert.test_insert took 12.22s > .Psycopg2IntegerSelect.test_select took 0.39s > .Psycopg2VariableDataInsert.test_insert took 17.30s > .Psycopg2VariableDataSelect.test_select took 1.71s > > .Psycopg2ctIntegerInsert.test_insert took 28.56s > .Psycopg2ctIntegerSelect.test_select took 10.48s > .Psycopg2ctVariableDataInsert.test_insert took 38.53s > .Psycopg2ctVariableDataSelect.test_select took 21.67s > > .PyPQIntegerInsertTest.test_insert took 22.53s > .PyPQIntegerSelectTest.test_select took 5.59s > .PyPQVariableDataInsertTest.test_insert took 26.86s > .PyPQVariableDataSelectTest.test_select took 24.84s > > PyPy 1.6.0 (32-bit, archlinux latest build), 200000 inserts > > Psycopg2ctIntegerInsert.test_insert took 14.11s > .Psycopg2ctIntegerSelect.test_select took 3.18s > .Psycopg2ctVariableDataInsert.test_insert took 29.36s > .Psycopg2ctVariableDataSelect.test_select took 7.78s > > .PyPQIntegerInsertTest.test_insert took 25.91s > .PyPQIntegerSelectTest.test_select took 1.92s > .PyPQVariableDataInsertTest.test_insert took 30.31s > .PyPQVariableDataSelectTest.test_select took 8.73s > > On 10/05/2011 01:38 AM, Maciej Fijalkowski wrote: >> >> On Tue, Oct 4, 2011 at 4:51 PM, Igor Katson ?wrote: >>> >>> Hi, Dan, >>> before answering I'll describe the situation a bit. >>> >>> there was a question today, if I know about pg8000 or psycopg2ct. 
>>> >>> As for pg8000, pure python should be slower than ctypes, anyway, so I >>> don't >>> think these two should be compared. >> >> [citation needed] > > Great data! It probably does make sense to run each of pypy tests twice to see how much time is spent warming up the JIT, although definitely the data is very interesting already. Cheers, fijal From ram at rachum.com Thu Oct 6 00:09:01 2011 From: ram at rachum.com (Ram Rachum) Date: Thu, 6 Oct 2011 00:09:01 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: <4250804.t7WuOKXY7P@hunter-laptop.tontut.fi> References: <4250804.t7WuOKXY7P@hunter-laptop.tontut.fi> Message-ID: Can you fill in something for "#Do something that uses lots of RAM"? Because I'm not sure I'll get it right. On Thu, Oct 6, 2011 at 12:02 AM, Alex Pyattaev wrote: > I've had a very similar stuff, as in something crashing only when run many > times when I had a bug in a container type implemented in C. Basically, I > had > wrong refcount for the objets, which caused them to be freed by garbage > collectore while they have been still used. Maybe something similar happens > in > the code that wraps windows API that handles file opening. That would > explain > why the bug never happens on linux. A good candidate would be incorrect > refcount for the return value if the file does not exist. Try something > like > this: > s="some_file" > rets=[] > for i in range(1000): > rets.append(os.stat(s)) > gc.collect() > #Do something that uses lots of RAM (but a random amount, preferably > in > small blocks) > print rets > if it crashes then you have exactly that issue. 1000 might be not enough to > toggle the crash though, as you need the OS to actually allocate different > segments of memory for this to work. The more RAM you have the more cycles > you > need to toggle the crash. At least this approach helped me to debug > extension > modules written in C. > > BTW, for me on VM the test case does not crash. 
But I have SP2 windows > there. > On Wednesday 05 October 2011 19:37:07 Ram Rachum wrote: > > On Wed, Oct 5, 2011 at 6:51 PM, Amaury Forgeot d'Arc > wrote: > > > 2011/10/5 Ram Rachum : > > > > Okay, I've spent a few hours print-debugging, and I think I've > > > > almost got it. > > > > > > > > The crash happens on a line: > > > > st = os.stat(s) > > > > > > > > inside `os.path.isdir`, where `s` is a string 'C:\\Documents and > > > > Settings\\User\\My Documents\\Python > > > > Projects\\GarlicSim\\garlicsim\\src' This is a directory that > > > > happens not to exist, but of course this is not> > > > a > > > > > > > good reason to crash. > > > > I have tried running `os.stat(s)` in the PyPy shell with that same > > > > `s`, > > > > > > but > > > > > > > didn't get a crash there. I don't know why it's crashing in Nose but > > > > not> > > > in > > > > > > > the shell. > > > > > > > > Does anyone have a clue? > > > > > > it's possible that it's a RPython-level exception, or a bad handle > > > because too many files wait for the garbage collector to close them. > > > > > > Can you give more information about the crash itself? > > > - What are the last lines printed in the console? Try to disable > > > "stdout capture" in Nose, by passing the -s option. > > > > This is the entire output: > > > > Preparing to run tests using Python 2.7.1 (080f42d5c4b4, Aug 23 2011, > > 11:41:11) > > [PyPy 1.6.0 with MSC v.1500 32 bit] > > Running tests directly from GarlicSim repo. > > Pypy doesn't have wxPython, not loading `garlicsim_wx` tests. 
> > nose.config: INFO: Set working dir to C:\Documents and Settings\User\My > > Documents\Python Projects\GarlicSim > > nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$'] > > nose.plugins.cover: INFO: Coverage report will include only packages: > > ['garlicsim', 'garlicsim_lib', 'garlicsim_wx', 'test_garlicsim', > > 'test_garlicsim_lib', 'test_garlicsim_wx', 'garlicsim', 'garlicsim_lib', > > 'garlicsim_wx', 'test_garlicsim', 'test_garlicsim_lib', > 'test_garlicsim_wx', > > 'garlicsim', 'garlicsim_lib', 'garlicsim_wx', 'test_garlicsim', > > 'test_garlicsim_lib', 'test_garlicsim_wx'] > > > > > > > > > > - after the pypy process has exited, type "echo %ERRORLEVEL%" in the > > > > > same console, to print the exit code > > > of the last process. Which number is it? > > > > -1073741819 > > > > > -- > > > Amaury Forgeot d'Arc From coolbutuseless at gmail.com Thu Oct 6 06:55:28 2011 From: coolbutuseless at gmail.com (mike c) Date: Thu, 6 Oct 2011 14:55:28 +1000 Subject: [pypy-dev] Can't pickle a numpy object? Message-ID: Hi there, I'm new to pypy and trying it out on some numerical projects, and the speed of pypy+numpy is about 4x faster than cpython+numpy. Pretty impressive! However, I want to pickle some of my numpy objects and I get the error: TypeError: can't pickle numarray objects (the full error is included below) I realise that the numpy implementation in pypy is currently a proof-of-concept, and so I was wondering what I would have to change to get numarrays to be pickleable. Is it as simple as adding something like a __pickle__ method to numarray? Or is the problem deeper than that? Mike.
>>>> import pickle, numpy >>>> a = numpy.array([1,2,3]) >>>> pickle.dumps(a) Traceback (most recent call last): File "", line 1, in File "/Users/mike/pypy-1.6/lib-python/modified-2.7/pickle.py", line 1423, in dumps Pickler(file, protocol).dump(obj) File "/Users/mike/pypy-1.6/lib-python/modified-2.7/pickle.py", line 224, in dump self.save(obj) File "/Users/mike/pypy-1.6/lib-python/modified-2.7/pickle.py", line 306, in save rv = reduce(self.proto) File "/Users/mike/pypy-1.6/lib-python/2.7/copy_reg.py", line 70, in _reduce_ex raise TypeError, "can't pickle %s objects" % base.__name__ TypeError: can't pickle numarray objects -------------- next part -------------- An HTML attachment was scrubbed... URL: From kretschmar-martin at t-online.de Wed Oct 5 22:05:00 2011 From: kretschmar-martin at t-online.de (Martin Kretschmar) Date: 05 Oct 2011 20:05 GMT Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: Message-ID: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> Hello, Microsoft's free WinDbg (http://en.wikipedia.org/wiki/WinDbg) can also attach to running processes. Regards, Martin "Ram Rachum" wrote: Hey guys, I upgraded to PyPy 1.6 on my 2 Windows XP 32 bit machines. It crashes on both systems when running the GarlicSim test suite. It shows a Windows error dialog saying "pypy.exe has encountered a problem and needs to close. We are sorry for the inconvenience." and giving this data: AppName: pypy.exe AppVer: 0.0.0.0 ModName: kernel32.dll ModVer: 5.1.2600.5512 Offset: 00040b0d I can also open a dialog with a lot of data on the error (don't know how useful it is) but Windows won't let me Ctrl-C it. What can I do? Thanks, Ram. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From amauryfa at gmail.com Thu Oct 6 10:13:48 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Thu, 6 Oct 2011 10:13:48 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> References: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> Message-ID: Hi, 2011/10/5 Martin Kretschmar : > Microsoft's free WinDbg (http://en.wikipedia.org/wiki/WinDbg) can also attach > to running processes. A release build of pypy contains no debug information. Believe me, a debugger won't show anything useful. It's already difficult enough in a debug build (because of the generated C code) and almost impossible when JIT code is involved, except maybe for Armin :-) A reproducible (short) test case would be really appreciated. -- Amaury Forgeot d'Arc From jezreel at gmail.com Thu Oct 6 10:23:36 2011 From: jezreel at gmail.com (Jez) Date: Thu, 6 Oct 2011 04:23:36 -0400 Subject: [pypy-dev] Contributing Message-ID: Hi all, I am a college student interested in contributing to PyPy. I am not too sure exactly what I would like to work on, but I would ultimately like to get involved with the core JIT code, and I would prefer to work on improving the generated code rather than working on tools like the jitviewer. I would be grateful if you guys could point me to some good first bugs, and / or suggest some projects that would be in line with my interests. A little more on myself: I contributed to a fairly popular browser extension called Vimium for about a year, and then I spent this summer hacking on Firefox. All that was mostly front-end work in Javascript, though I am pretty decent at C++ as well. I'm hoping to do more low-level systems work now, and learn some interesting theory in the process. Let me know how I might participate! Cheers, Jez Ng -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arigo at tunes.org Thu Oct 6 10:42:05 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 6 Oct 2011 10:42:05 +0200 Subject: [pypy-dev] PyCon 2012 Message-ID: Hi all, The deadline for submitting talks to the US PyCon 2012 conference is already coming: October 12. Armin From fijall at gmail.com Thu Oct 6 10:54:16 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 6 Oct 2011 10:54:16 +0200 Subject: [pypy-dev] Contributing In-Reply-To: References: Message-ID: On Thu, Oct 6, 2011 at 10:23 AM, Jez wrote: > Hi all, > > I am a college student interested in contributing to PyPy. I am not too sure > exactly what I would like to work on, but I would ultimately like to get > involved with the core JIT code, and I would prefer to work on improving the > generated code rather than working on tools like the jitviewer. I would be > grateful if you guys could point me to some good first bugs, and / or > suggest some projects that would be in line with my interests. Hi, great! There is a couple of tasks related to the generated assembler that can be improved. The long-waiting large size task is improving the current register allocator. A bit less demanding one is investigating whether reordering instructions makes sense on modern machines (also not very small though). If you want to know more, we usually hang out on IRC #pypy on freenode, otherwise feel free to ask more questions here. Cheers, fijal > > A little more on myself: I contributed to a fairly popular browser extension > called Vimium for about a year, and then I spent this summer hacking on > Firefox. All that ws mostly front-end work in Javascript, though I am pretty > decent at C++ as well. I'm hoping to do more low-level systems work now, and > learn some interesting theory in the process. Let me know how I might > participate! 
> > Cheers, > Jez Ng > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > From hakan at debian.org Thu Oct 6 11:33:25 2011 From: hakan at debian.org (Hakan Ardo) Date: Thu, 6 Oct 2011 11:33:25 +0200 Subject: [pypy-dev] Can't pickle a numpy object? In-Reply-To: References: Message-ID: Hi, I think what you need to add is a __reduce__ method. We have support for pickling array.array. Search for reduce in pypy/module/array/interp_array.py to get an idea of how it was implemented. If there are deeper issues that makes pickling more complicated for numpy arrays I dont know... On Thu, Oct 6, 2011 at 6:55 AM, mike c wrote: > Hi there, > I'm new to pypy and trying it out on some numerical projects and the speed > of pypy+numpy is about 4x faster than cpython+numpy. Pretty impressive! > However, I want to pickle some of my numpy objects and I get the > error:?TypeError: can't pickle numarray objects ?(full error is included > below) > I realise that the numpy implementation in pypy is currently a > proof-of-concept, and so I was wondering what I would have to change to get > numarray's to be pickle-able. ?As simple as adding a something like a > __pickle__ method to numarray? ?Or is the problem deeper than that? > Mike. > > >>>>> import pickle, numpy >>>>> a = numpy.array([1,2,3]) >>>>> pickle.dumps(a) > Traceback (most recent call last): > ? File "", line 1, in > ? File "/Users/mike/pypy-1.6/lib-python/modified-2.7/pickle.py", line 1423, > in dumps > ? ? Pickler(file, protocol).dump(obj) > ? File "/Users/mike/pypy-1.6/lib-python/modified-2.7/pickle.py", line 224, > in dump > ? ? self.save(obj) > ? File "/Users/mike/pypy-1.6/lib-python/modified-2.7/pickle.py", line 306, > in save > ? ? rv = reduce(self.proto) > ? File "/Users/mike/pypy-1.6/lib-python/2.7/copy_reg.py", line 70, in > _reduce_ex > ? ? 
raise TypeError, "can't pickle %s objects" % base.__name__ > TypeError: can't pickle numarray objects > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -- Håkan Ardö From ram at rachum.com Thu Oct 6 15:13:46 2011 From: ram at rachum.com (Ram Rachum) Date: Thu, 6 Oct 2011 15:13:46 +0200 Subject: [pypy-dev] Why `pypy-c.exe`? Message-ID: Why is PyPy's executable called `pypy-c.exe` on Windows? I just renamed mine to `pypy.exe`, is that okay? -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Thu Oct 6 15:16:43 2011 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 6 Oct 2011 09:16:43 -0400 Subject: [pypy-dev] Why `pypy-c.exe`? In-Reply-To: References: Message-ID: 2011/10/6 Ram Rachum : > Why is PyPy's executable called `pypy-c.exe` on Windows? I just renamed mine > to `pypy.exe`, is that okay? Because it's generated by the "C" backend. -- Regards, Benjamin From ram at rachum.com Thu Oct 6 15:18:00 2011 From: ram at rachum.com (Ram Rachum) Date: Thu, 6 Oct 2011 15:18:00 +0200 Subject: [pypy-dev] Bugs in using `easy_install` in PyPy 1.6 on Windows Message-ID: Hey, I remember that I had trouble using `easy_install` when I was just setting up PyPy 1.6 on my XP box. I had no time to document it at that time, but I believe my workflow was to download and run distribute_setup.py, then do `easy_install pip`, and then `pip install nose`, and it didn't work so I resorted to downloading tarballs manually and running `setup.py install`, which worked. Can you add this to the test suite? I think it's something that needs to be tested automatically before any PyPy release on any platform. -------------- next part -------------- An HTML attachment was scrubbed...
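[Archive note: the `__reduce__` approach Hakan suggests in the pickling thread above can be sketched at application level. `MiniArray` below is a hypothetical stand-in for a numarray-like type, not PyPy's actual class; the real fix would have to be done at interp-level inside PyPy itself.]

```python
import pickle

class MiniArray(object):
    """Hypothetical stand-in for a numarray-like type (illustration only)."""
    def __init__(self, data):
        self.data = list(data)

    def __reduce__(self):
        # pickle stores the (callable, args) pair this returns;
        # unpickling then calls callable(*args) to rebuild the object
        return (MiniArray, (self.data,))

a = MiniArray([1, 2, 3])
b = pickle.loads(pickle.dumps(a))
print(b.data)
```

Without `__reduce__` (or one of the other pickle hooks), pickle falls back to copy_reg's `_reduce_ex`, which is exactly where the `TypeError: can't pickle numarray objects` in Mike's traceback is raised.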
URL: From ram at rachum.com Thu Oct 6 15:19:27 2011 From: ram at rachum.com (Ram Rachum) Date: Thu, 6 Oct 2011 15:19:27 +0200 Subject: [pypy-dev] Why `pypy-c.exe`? In-Reply-To: References: Message-ID: On Thu, Oct 6, 2011 at 3:16 PM, Benjamin Peterson wrote: > 2011/10/6 Ram Rachum : > > Why is PyPy's executable called `pypy-c.exe` on Windows? I just renamed > mine > > to `pypy.exe`, is that okay? > > Because it's generated by the "C" backend. > > -- > Regards, > Benjamin > And is that a good enough reason to call it "pypy-c.exe" and risk confusing users? Also, I hope someone could answer my second question. Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Thu Oct 6 15:25:13 2011 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 6 Oct 2011 09:25:13 -0400 Subject: [pypy-dev] Why `pypy-c.exe`? In-Reply-To: References: Message-ID: 2011/10/6 Ram Rachum : > > On Thu, Oct 6, 2011 at 3:16 PM, Benjamin Peterson > wrote: >> >> 2011/10/6 Ram Rachum : >> > Why is PyPy's executable called `pypy-c.exe` on Windows? I just renamed >> > mine >> > to `pypy.exe`, is that okay? >> >> Because it's generated by the "C" backend. >> >> -- >> Regards, >> Benjamin > > And is that a good enough reason to call it "pypy-c.exe" and risk confusing > users? Is it confusing? > Also, I hope someone could answer my second question. No -- Regards, Benjamin From benjamin at python.org Thu Oct 6 15:25:29 2011 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 6 Oct 2011 09:25:29 -0400 Subject: [pypy-dev] Why `pypy-c.exe`? In-Reply-To: References: Message-ID: 2011/10/6 Benjamin Peterson : > 2011/10/6 Ram Rachum : >> >> On Thu, Oct 6, 2011 at 3:16 PM, Benjamin Peterson >> wrote: >>> >>> 2011/10/6 Ram Rachum : >>> > Why is PyPy's executable called `pypy-c.exe` on Windows? I just renamed >>> > mine >>> > to `pypy.exe`, is that okay? >>> >>> Because it's generated by the "C" backend.
>>> >>> -- >>> Regards, >>> Benjamin >> >> And is that a good enough reason to call it "pypy-c.exe" and risk confusing >> users? > > Is it confusing? > >> Also, I hope someone could answer my second question. > > No And by that I mean it's not harmful. -- Regards, Benjamin From ram at rachum.com Thu Oct 6 15:35:49 2011 From: ram at rachum.com (Ram Rachum) Date: Thu, 6 Oct 2011 15:35:49 +0200 Subject: [pypy-dev] Why `pypy-c.exe`? In-Reply-To: References: Message-ID: On Thu, Oct 6, 2011 at 3:25 PM, Benjamin Peterson wrote: > 2011/10/6 Ram Rachum : > > > > On Thu, Oct 6, 2011 at 3:16 PM, Benjamin Peterson > > wrote: > >> > >> 2011/10/6 Ram Rachum : > >> > Why is PyPy's executable called `pypy-c.exe` on Windows? I just > renamed > >> > mine > >> > to `pypy.exe`, is that okay? > >> > >> Because it's generated by the "C" backend. > >> > >> -- > >> Regards, > >> Benjamin > > > > And is that a good enough reason to call it "pypy-c.exe" and risk > confusing > > users? > > Is it confusing? > I personally find it confusing, yes. Please remember that what's obvious to you is not necessarily obvious to other people. Here are a few scenarios that could have been true from my point of view: - There is a `pypy.exe` file somewhere in the distribution, and it should call `pypy-c.exe` internally. - PyPy should always be run as `pypy-c` from Windows. (Why? Who knows.) - I have made some kind of error in my installation and the `pypy.exe` file is missing. So unless there is a good enough reason to call it `pypy-c.exe`, I suggest future releases will be made with an executable of `pypy.exe`. > > Also, I hope someone could answer my second question. > > No (And by that I mean it's not harmful.) Thanks. > > > > -- > Regards, > Benjamin > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amauryfa at gmail.com Thu Oct 6 15:37:24 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Thu, 6 Oct 2011 15:37:24 +0200 Subject: [pypy-dev] Bugs in using `easy_install` in PyPy 1.6 on Windows In-Reply-To: References: Message-ID: 2011/10/6 Ram Rachum : > I remember that I had trouble using `easy_install` when I was just setting > up PyPy 1.6 on my XP box. > I had no time to document it at that time, but I believe my workflow was to > download and run distribute_setup.py, then do `easy_install pip`, and then > `pip install nose`, and it didn't work so I resorted to downloading tarballs > manually and running `setup.py install`, which worked. > Can you add this to the test suite? I think it's something that needs to be > tested automatically before any PyPy release on any platform. I don't think there are tests for an "installed" interpreter at the moment. Do you remember the issues you had at the time? -- Amaury Forgeot d'Arc From ram at rachum.com Thu Oct 6 16:09:01 2011 From: ram at rachum.com (Ram Rachum) Date: Thu, 6 Oct 2011 16:09:01 +0200 Subject: [pypy-dev] Bugs in using `easy_install` in PyPy 1.6 on Windows In-Reply-To: References: Message-ID: On Thu, Oct 6, 2011 at 3:37 PM, Amaury Forgeot d'Arc wrote: 2011/10/6 Ram Rachum : > > I remember that I had trouble using `easy_install` when I was just > setting > > up PyPy 1.6 on my XP box. > > > I had no time to document it at that time, but I believe my workflow was > to > > download and run distribute_setup.py, then do `easy_install pip`, and > then > > `pip install nose`, and it didn't work so I resorted to downloading > tarballs > > manually and running `setup.py install`, which worked. > > > Can you add this to the test suite? I think it's something that needs to > be > > tested automatically before any PyPy release on any platform. > > I don't think there are tests for an "installed" interpreter at the moment. > Do you remember the issues you had at the time? 
> > -- > Amaury Forgeot d'Arc > I reproduced it now on a VM: 1. Install Distribute using `distribute_setup.py`. 2. `easy_install pip`, the `pip` script does get installed, but not properly: Searching for pip Reading http://pypi.python.org/simple/pip/ Reading http://www.pip-installer.org Reading http://pip.openplans.org Best match: pip 1.0.2 Downloading http://pypi.python.org/packages/source/p/pip/pip-1.0.2.tar.gz#md5=47 ec6ff3f6d962696fe08d4c8264ad49 Processing pip-1.0.2.tar.gz Running pip-1.0.2\setup.py -q bdist_egg --dist-dir c:\docume~1\admini~1\locals~1 \temp\easy_install-5ffmhm\pip-1.0.2\egg-dist-tmp-w739em warning: no files found matching '*.html' under directory 'docs' warning: no previously-included files matching '*.txt' found under directory 'do cs\_build' no previously-included directories found matching 'docs\_build\_sources' No eggs found in c:\docume~1\admini~1\locals~1\temp\easy_install-5ffmhm\pip-1.0. 2\egg-dist-tmp-w739em (setup script problem?) 3. `pip install nose`: You get an error: Traceback (most recent call last): File "app_main.py", line 53, in run_toplevel File "c:\Documents and Settings\Administrator\Desktop\pypy-1.6\bin\pip-script. py", line 5, in from pkg_resources import load_entry_point File "c:\Documents and Settings\Administrator\Desktop\pypy-1.6\site-packages\d istribute-0.6.21-py2.7.egg\pkg_resources.py", line 2709, in working_set.require(__requires__) File "c:\Documents and Settings\Administrator\Desktop\pypy-1.6\site-packages\d istribute-0.6.21-py2.7.egg\pkg_resources.py", line 686, in require needed = self.resolve(parse_requirements(requirements)) File "c:\Documents and Settings\Administrator\Desktop\pypy-1.6\site-packages\d istribute-0.6.21-py2.7.egg\pkg_resources.py", line 584, in resolve raise DistributionNotFound(req) DistributionNotFound: pip==1.0.2 Any clue? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fuzzyman at gmail.com Thu Oct 6 16:40:34 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Thu, 6 Oct 2011 15:40:34 +0100 Subject: [pypy-dev] Why `pypy-c.exe`? In-Reply-To: References: Message-ID: On 6 October 2011 14:25, Benjamin Peterson wrote: > 2011/10/6 Ram Rachum : > > > > On Thu, Oct 6, 2011 at 3:16 PM, Benjamin Peterson > > wrote: > >> > >> 2011/10/6 Ram Rachum : > >> > Why is PyPy's executable called `pypy-c.exe` on Windows? I just > renamed > >> > mine > >> > to `pypy.exe`, is that okay? > >> > >> Because it's generated by the "C" backend. > >> > >> -- > >> Regards, > >> Benjamin > > > > And is that a good enough reason to call it "pypy-c.exe" and risk > confusing > > users? > > Is it confusing? > > Yes Michael > > Also, I hope someone could answer my second question. > > No > > > -- > Regards, > Benjamin > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Fri Oct 7 08:35:38 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 7 Oct 2011 08:35:38 +0200 Subject: [pypy-dev] Why `pypy-c.exe`? In-Reply-To: References: Message-ID: Hi, On Thu, Oct 6, 2011 at 16:40, Michael Foord wrote: >> > And is that a good enough reason to call it "pypy-c.exe" and risk >> > confusing users? >> >> Is it confusing? > > Yes Ah, if you say so. Fixed. 
Armin From ram at rachum.com Fri Oct 7 11:03:08 2011 From: ram at rachum.com (Ram Rachum) Date: Fri, 7 Oct 2011 11:03:08 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> Message-ID: On Thu, Oct 6, 2011 at 10:13 AM, Amaury Forgeot d'Arc wrote: > Hi, > > 2011/10/5 Martin Kretschmar : > > Microsofts free WinDbg (http://en.wikipedia.org/wiki/WinDbg) can also > attach > > to running processes. > > A release build of pypy contains no debug information > Believe me, a debugger won't show anything useful. > > It's already difficult enough in a debug build (because of the generated C > code) > and almost impossible when JIT code is involved, except maybe for Armin :-) > > A reproducible (short) test case would be really appreciated. > > -- > Amaury Forgeot d'Arc > Hey guys, I've managed to produce a VM that shows the bug. You can download it here: http://dl.dropbox.com/u/1927707/VM%20for%20debugging%20PyPy.rar It's a VMWare Virtual Machine, and it weighs 2 GB compressed. Once you fire up the VM, there are short instructions in a text file on the desktop to making PyPy crash. Will anyone have time to try that? Thanks, Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lavrenov at gmail.com Fri Oct 7 12:24:39 2011 From: max.lavrenov at gmail.com (Max Lavrenov) Date: Fri, 7 Oct 2011 14:24:39 +0400 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: References: Message-ID: Hello all I've written small program that show the problem. http://paste.pocoo.org/show/488730/ You can test it with this line ab -n 1000 http://localhost:8032/ After 810 request i am starting to get errors http://paste.pocoo.org/show/488732/ if i start it with --jit off it work fine Yes, please try to find a minimal example that shows the issue. > > -- > Amaury Forgeot d'Arc > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From igor.katson at gmail.com Fri Oct 7 13:33:27 2011 From: igor.katson at gmail.com (Igor Katson) Date: Fri, 07 Oct 2011 15:33:27 +0400 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app Message-ID: <4E8EE387.2070806@gmail.com> When I first started benchmarking one of my Django sites, http://trip-travel.ru/ (using postgres driver pypq), I was disappointed by the results. PyPy was visually much slower than cPython (I just looked at how the site loads in the browser) But today I got some incredible results: I finally made PyPy work faster than cPython, and found out that it got faster after loading the page several hundred times with "ab" or "siege" Here are the results of mean response time by querying the home page with apache's "ab" (cPython 2.7.2, Django 1.3, PyPy 1.6.0), served with cherrypy wsgi server: After 10 requests (excluding the first request): cPython - 163.529 ms PyPy - 460.879 ms 50 requests more: cPython - 168.539 PyPy - 249.850 100 requests more: cPython - 166.278 ms PyPy - 131.104 100 requests more: cPython - 165.820 PyPy - 115.446 300 requests more: cPython - 165.543 PyPy - 107.636 300 requests more: cPython - 166.425 PyPy - 103.065 As we can see, the JIT needs much time to warm up, but when it does, the result is pretty noticeable. By the way, with psycopg2, the site responds in 155.766 ms on average (only 10 ms faster), so using PyPy with Django makes much sense for me. As for now, pypy cannot run with uWSGI, which I use in production, but maybe I'll switch to PyPy for production deployments if the "PyPy + PyPQ + Some pure python WSGI server" suite will outperform (uWSGI + cPython + psycopg2). Though, the need to load the page 500 times after each server reload is not comfortable. -------------- next part -------------- An HTML attachment was scrubbed...
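[Archive note: the warm-up effect Igor measures above can be demonstrated in miniature without a web stack. This rough sketch (the stand-in function and batch sizes are arbitrary choices, not from the thread) times successive batches of calls; on PyPy the mean per-call time falls as the JIT compiles the hot loop, while on CPython it stays roughly flat.]

```python
import timeit

def handler(n=500):
    # stand-in for a request handler: some pure-Python arithmetic
    return sum(i * i for i in range(n))

def mean_ms(batch):
    # mean wall-clock time per call over one batch, in milliseconds
    return timeit.timeit(handler, number=batch) / batch * 1000.0

# On PyPy the early batches pay for tracing and compiling the loop;
# later batches run the generated machine code.
for batch in (10, 50, 100, 300):
    print("%4d calls: %.3f ms/call" % (batch, mean_ms(batch)))
```

The falling per-batch means correspond to Igor's "After 10 requests / 50 requests more / ..." columns, just at a much smaller scale.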
URL: From fuzzyman at gmail.com Fri Oct 7 15:12:16 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Fri, 7 Oct 2011 14:12:16 +0100 Subject: [pypy-dev] Why `pypy-c.exe`? In-Reply-To: References: Message-ID: On 7 October 2011 07:35, Armin Rigo wrote: > Hi, > > On Thu, Oct 6, 2011 at 16:40, Michael Foord wrote: > >> > And is that a good enough reason to call it "pypy-c.exe" and risk > >> > confusing users? > >> > >> Is it confusing? > > > > Yes > > Ah, if you say so. Fixed. > > Ever the man of action. Thanks Armin. Michael > > Armin > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Oct 7 18:50:15 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 7 Oct 2011 18:50:15 +0200 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: <4E8EE387.2070806@gmail.com> References: <4E8EE387.2070806@gmail.com> Message-ID: On Fri, Oct 7, 2011 at 1:33 PM, Igor Katson wrote: > When I first started benchmarking one of my Django sites, > http://trip-travel.ru/ (using postgres driver pypq), > I was disappointed by the results. 
PyPy was visually much slower than > cPython (I just looked at how the site loads in the browser) > > But today I got some incredible results, I finally made PyPy to work faster > than cPython, and found out that it got faster after loading the page > several hundred times with "ab" or "siege" > > Here a the results, of mean response time by querying the home page with > apache's "ab" (cPython 2.7.2, Django 1.3, PyPy 1.6.0), served with cherrypy > wsgi server: > > After 10 requests (excluding the first request): > cPython - 163.529 ms > PyPy - 460.879 ms > > 50 request more: > cPython - 168.539 > PyPy - 249.850 > > 100 requests more: > cPython - 166.278 ms > PyPy - 131.104 > > 100 requests more: > cPython - 165.820 > PyPy - 115.446 > > 300 requests more: > cPython - 165.543 > PyPy - 107.636 > > 300 requests more: > cPython - 166.425 > PyPy - 103.065 Thanks for doing the benchmarks :) > > As we can see, the JIT needs much time to warm up, but when it does, the > result is pretty noticeable. > By the way, with psycopg2, the site responds for 155.766 ms in average (only > 10 ms faster), so using PyPy with Django makes much sense for me. > > As for now, pypy cannot run with uWSGI, which I use in production, but maybe > i'll switch to PyPy for production deployments if "PyPy + PyPQ + Some pure > python WSGI server" suite will outperform (uWSGI + cPython + psycopg2). > Though, the need to load the page 500 times after each server reload is not > comfortable. I've heard people using gunicorn. Maybe this is a good try? Loading the pages is indeed annoying, but you need to load it a couple times for results not to be noticably slower :) We kind of know that the JIT warmup time is high, but it's partly a thing to fix and partly an inherent property of the JIT. 
Cheers, fijal From fuzzyman at gmail.com Fri Oct 7 19:04:17 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Fri, 7 Oct 2011 18:04:17 +0100 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> Message-ID: On 7 October 2011 17:50, Maciej Fijalkowski wrote: > On Fri, Oct 7, 2011 at 1:33 PM, Igor Katson wrote: > > When I first started benchmarking one of my Django sites, > > http://trip-travel.ru/ (using postgres driver pypq), > > I was disappointed by the results. PyPy was visually much slower than > > cPython (I just looked at how the site loads in the browser) > > > > But today I got some incredible results, I finally made PyPy to work > faster > > than cPython, and found out that it got faster after loading the page > > several hundred times with "ab" or "siege" > > > > Here a the results, of mean response time by querying the home page with > > apache's "ab" (cPython 2.7.2, Django 1.3, PyPy 1.6.0), served with > cherrypy > > wsgi server: > > > > After 10 requests (excluding the first request): > > cPython - 163.529 ms > > PyPy - 460.879 ms > > > > 50 request more: > > cPython - 168.539 > > PyPy - 249.850 > > > > 100 requests more: > > cPython - 166.278 ms > > PyPy - 131.104 > > > > 100 requests more: > > cPython - 165.820 > > PyPy - 115.446 > > > > 300 requests more: > > cPython - 165.543 > > PyPy - 107.636 > > > > 300 requests more: > > cPython - 166.425 > > PyPy - 103.065 > > Thanks for doing the benchmarks :) > > > > > As we can see, the JIT needs much time to warm up, but when it does, the > > result is pretty noticeable. > > By the way, with psycopg2, the site responds for 155.766 ms in average > (only > > 10 ms faster), so using PyPy with Django makes much sense for me. 
> > > > As for now, pypy cannot run with uWSGI, which I use in production, but > maybe > > i'll switch to PyPy for production deployments if "PyPy + PyPQ + Some > pure > > python WSGI server" suite will outperform (uWSGI + cPython + psycopg2). > > Though, the need to load the page 500 times after each server reload is > not > > comfortable. > > I've heard people using gunicorn. Maybe this is a good try? Loading > the pages is indeed annoying, but you need to load it a couple times > for results not to be noticably slower :) We kind of know that the JIT > warmup time is high, but it's partly a thing to fix and partly an > inherent property of the JIT. > > FWIW I shared this internally at Canonical, and whilst people were impressed there was some concern that having substantially worse performance for the first few hundred requests would a) be a showstopper and b) screw up metrics. The technique of deliberate warming immediately after restart is interesting, but it would be hard to hit all the code paths. I realise this is inherent in the way the jit works - but I thought it was a response worth sharing. I also assured them that pre-warm-up performance would continue to improve as pypy improves. ;-) All the best, Michael Foord > Cheers, > fijal > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fijall at gmail.com Fri Oct 7 19:08:45 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 7 Oct 2011 19:08:45 +0200 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> Message-ID: On Fri, Oct 7, 2011 at 7:04 PM, Michael Foord wrote: > > > On 7 October 2011 17:50, Maciej Fijalkowski wrote: >> >> On Fri, Oct 7, 2011 at 1:33 PM, Igor Katson wrote: >> > When I first started benchmarking one of my Django sites, >> > http://trip-travel.ru/ (using postgres driver pypq), >> > I was disappointed by the results. PyPy was visually much slower than >> > cPython (I just looked at how the site loads in the browser) >> > >> > But today I got some incredible results, I finally made PyPy to work >> > faster >> > than cPython, and found out that it got faster after loading the page >> > several hundred times with "ab" or "siege" >> > >> > Here a the results, of mean response time by querying the home page with >> > apache's "ab" (cPython 2.7.2, Django 1.3, PyPy 1.6.0), served with >> > cherrypy >> > wsgi server: >> > >> > After 10 requests (excluding the first request): >> > cPython - 163.529 ms >> > PyPy - 460.879 ms >> > >> > 50 request more: >> > cPython - 168.539 >> > PyPy - 249.850 >> > >> > 100 requests more: >> > cPython - 166.278 ms >> > PyPy - 131.104 >> > >> > 100 requests more: >> > cPython - 165.820 >> > PyPy - 115.446 >> > >> > 300 requests more: >> > cPython - 165.543 >> > PyPy - 107.636 >> > >> > 300 requests more: >> > cPython - 166.425 >> > PyPy - 103.065 >> >> Thanks for doing the benchmarks :) >> >> > >> > As we can see, the JIT needs much time to warm up, but when it does, the >> > result is pretty noticeable. >> > By the way, with psycopg2, the site responds for 155.766 ms in average >> > (only >> > 10 ms faster), so using PyPy with Django makes much sense for me. 
>> > >> > As for now, pypy cannot run with uWSGI, which I use in production, but >> > maybe >> > i'll switch to PyPy for production deployments if "PyPy + PyPQ + Some >> > pure >> > python WSGI server" suite will outperform (uWSGI + cPython + psycopg2). >> > Though, the need to load the page 500 times after each server reload is >> > not >> > comfortable. >> >> I've heard people using gunicorn. Maybe this is a good try? Loading >> the pages is indeed annoying, but you need to load it a couple times >> for results not to be noticably slower :) We kind of know that the JIT >> warmup time is high, but it's partly a thing to fix and partly an >> inherent property of the JIT. >> > > > FWIW I shared this internally at Canonical, and whilst people were impressed > there was some concern that having substantially worse performance for the > first few hundred requests would a) be a showstopper and b) screw up > metrics. The technique of deliberate warming immediately after restart is > interesting, but it would be hard to hit all the code paths. > > I realise this is inherent in the way the jit works - but I thought it was a > response worth sharing. I also assured them that pre-warm-up performance > would continue to improve as pypy improves. ;-) > > All the best, > > Michael Foord > It's also true for anything based on JVM and I've *never* seen anyone complain about it. The whole concept of a JIT is just "new" to python world. Seriously, would actually anyone notice if first 50 requests after restart take extra 200ms??? I'm not sure, my computer and my internet connection both have hiccups of over 200ms. Note that many code paths are common, which means that speed up is also shared. 
From fuzzyman at gmail.com Fri Oct 7 19:15:22 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Fri, 7 Oct 2011 18:15:22 +0100 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> Message-ID: On 7 October 2011 18:08, Maciej Fijalkowski wrote: > On Fri, Oct 7, 2011 at 7:04 PM, Michael Foord wrote: > > > > > > On 7 October 2011 17:50, Maciej Fijalkowski wrote: > >> > >> On Fri, Oct 7, 2011 at 1:33 PM, Igor Katson > wrote: > >> > When I first started benchmarking one of my Django sites, > >> > http://trip-travel.ru/ (using postgres driver pypq), > >> > I was disappointed by the results. PyPy was visually much slower than > >> > cPython (I just looked at how the site loads in the browser) > >> > > >> > But today I got some incredible results, I finally made PyPy to work > >> > faster > >> > than cPython, and found out that it got faster after loading the page > >> > several hundred times with "ab" or "siege" > >> > > >> > Here a the results, of mean response time by querying the home page > with > >> > apache's "ab" (cPython 2.7.2, Django 1.3, PyPy 1.6.0), served with > >> > cherrypy > >> > wsgi server: > >> > > >> > After 10 requests (excluding the first request): > >> > cPython - 163.529 ms > >> > PyPy - 460.879 ms > >> > > >> > 50 request more: > >> > cPython - 168.539 > >> > PyPy - 249.850 > >> > > >> > 100 requests more: > >> > cPython - 166.278 ms > >> > PyPy - 131.104 > >> > > >> > 100 requests more: > >> > cPython - 165.820 > >> > PyPy - 115.446 > >> > > >> > 300 requests more: > >> > cPython - 165.543 > >> > PyPy - 107.636 > >> > > >> > 300 requests more: > >> > cPython - 166.425 > >> > PyPy - 103.065 > >> > >> Thanks for doing the benchmarks :) > >> > >> > > >> > As we can see, the JIT needs much time to warm up, but when it does, > the > >> > result is pretty noticeable. 
> >> > By the way, with psycopg2, the site responds for 155.766 ms in average > >> > (only > >> > 10 ms faster), so using PyPy with Django makes much sense for me. > >> > > >> > As for now, pypy cannot run with uWSGI, which I use in production, but > >> > maybe > >> > i'll switch to PyPy for production deployments if "PyPy + PyPQ + Some > >> > pure > >> > python WSGI server" suite will outperform (uWSGI + cPython + > psycopg2). > >> > Though, the need to load the page 500 times after each server reload > is > >> > not > >> > comfortable. > >> > >> I've heard people using gunicorn. Maybe this is a good try? Loading > >> the pages is indeed annoying, but you need to load it a couple times > >> for results not to be noticably slower :) We kind of know that the JIT > >> warmup time is high, but it's partly a thing to fix and partly an > >> inherent property of the JIT. > >> > > > > > > FWIW I shared this internally at Canonical, and whilst people were > impressed > > there was some concern that having substantially worse performance for > the > > first few hundred requests would a) be a showstopper and b) screw up > > metrics. The technique of deliberate warming immediately after restart is > > interesting, but it would be hard to hit all the code paths. > > > > I realise this is inherent in the way the jit works - but I thought it > was a > > response worth sharing. I also assured them that pre-warm-up performance > > would continue to improve as pypy improves. ;-) > > > > All the best, > > > > Michael Foord > > > > It's also true for anything based on JVM and I've *never* seen anyone > complain about it. The whole concept of a JIT is just "new" to python > world. Seriously, would actually anyone notice if first 50 requests > after restart take extra 200ms??? I'm not sure, my computer and my > internet connection both have hiccups of over 200ms. Note that many > code paths are common, which means that speed up is also shared. 
> I think you have a point and I've added your response to the internal discussion. Michael -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From santagada at gmail.com Fri Oct 7 19:21:52 2011 From: santagada at gmail.com (Leonardo Santagada) Date: Fri, 7 Oct 2011 14:21:52 -0300 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> Message-ID: No one complains about it, but Solr does do a warmup phase while bringing the server up before starting to serve real requests. On Fri, Oct 7, 2011 at 2:15 PM, Michael Foord wrote: > > > On 7 October 2011 18:08, Maciej Fijalkowski wrote: >> >> On Fri, Oct 7, 2011 at 7:04 PM, Michael Foord wrote: >> > >> > >> > On 7 October 2011 17:50, Maciej Fijalkowski wrote: >> >> >> >> On Fri, Oct 7, 2011 at 1:33 PM, Igor Katson >> >> wrote: >> >> > When I first started benchmarking one of my Django sites, >> >> > http://trip-travel.ru/ (using postgres driver pypq), >> >> > I was disappointed by the results. 
PyPy was visually much slower than >> >> > cPython (I just looked at how the site loads in the browser) >> >> > >> >> > But today I got some incredible results, I finally made PyPy to work >> >> > faster >> >> > than cPython, and found out that it got faster after loading the page >> >> > several hundred times with "ab" or "siege" >> >> > >> >> > Here a the results, of mean response time by querying the home page >> >> > with >> >> > apache's "ab" (cPython 2.7.2, Django 1.3, PyPy 1.6.0), served with >> >> > cherrypy >> >> > wsgi server: >> >> > >> >> > After 10 requests (excluding the first request): >> >> > cPython - 163.529 ms >> >> > PyPy - 460.879 ms >> >> > >> >> > 50 request more: >> >> > cPython - 168.539 >> >> > PyPy - 249.850 >> >> > >> >> > 100 requests more: >> >> > cPython - 166.278 ms >> >> > PyPy - 131.104 >> >> > >> >> > 100 requests more: >> >> > cPython - 165.820 >> >> > PyPy - 115.446 >> >> > >> >> > 300 requests more: >> >> > cPython - 165.543 >> >> > PyPy - 107.636 >> >> > >> >> > 300 requests more: >> >> > cPython - 166.425 >> >> > PyPy - 103.065 >> >> >> >> Thanks for doing the benchmarks :) >> >> >> >> > >> >> > As we can see, the JIT needs much time to warm up, but when it does, >> >> > the >> >> > result is pretty noticeable. >> >> > By the way, with psycopg2, the site responds for 155.766 ms in >> >> > average >> >> > (only >> >> > 10 ms faster), so using PyPy with Django makes much sense for me. >> >> > >> >> > As for now, pypy cannot run with uWSGI, which I use in production, >> >> > but >> >> > maybe >> >> > i'll switch to PyPy for production deployments if "PyPy + PyPQ + Some >> >> > pure >> >> > python WSGI server" suite will outperform (uWSGI + cPython + >> >> > psycopg2). >> >> > Though, the need to load the page 500 times after each server reload >> >> > is >> >> > not >> >> > comfortable. >> >> >> >> I've heard people using gunicorn. Maybe this is a good try? 
Loading >> >> the pages is indeed annoying, but you need to load it a couple times >> >> for results not to be noticably slower :) We kind of know that the JIT >> >> warmup time is high, but it's partly a thing to fix and partly an >> >> inherent property of the JIT. >> >> >> > >> > >> > FWIW I shared this internally at Canonical, and whilst people were >> > impressed >> > there was some concern that having substantially worse performance for >> > the >> > first few hundred requests would a) be a showstopper and b) screw up >> > metrics. The technique of deliberate warming immediately after restart >> > is >> > interesting, but it would be hard to hit all the code paths. >> > >> > I realise this is inherent in the way the jit works - but I thought it >> > was a >> > response worth sharing. I also assured them that pre-warm-up performance >> > would continue to improve as pypy improves. ;-) >> > >> > All the best, >> > >> > Michael Foord >> > >> >> It's also true for anything based on JVM and I've *never* seen anyone >> complain about it. The whole concept of a JIT is just "new" to python >> world. Seriously, would actually anyone notice if first 50 requests >> after restart take extra 200ms??? I'm not sure, my computer and my >> internet connection both have hiccups of over 200ms. Note that many >> code paths are common, which means that speed up is also shared. > > I think you have a point and I've added your response to the internal > discussion. > > Michael > > -- > > http://www.voidspace.org.uk/ > > > May you do good and not evil > May you find forgiveness for yourself and forgive others > May you share freely, never taking more than you give. 
> -- the sqlite blessing http://www.sqlite.org/different.html > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -- Leonardo Santagada From fijall at gmail.com Fri Oct 7 19:23:00 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 7 Oct 2011 19:23:00 +0200 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> Message-ID: On Fri, Oct 7, 2011 at 7:19 PM, Ian P. Cooke wrote: > > On Oct 07, 2011, at 11:50 , Maciej Fijalkowski wrote: > > On Fri, Oct 7, 2011 at 1:33 PM, Igor Katson wrote: > > [...] > > Though, the need to load the page 500 times after each server reload is not > > comfortable. > > [...] > I've heard people using gunicorn. Maybe this is a good try? Loading > the pages is indeed annoying, but you need to load it a couple times > for results not to be noticably slower :) We kind of know that the JIT > warmup time is high, but it's partly a thing to fix and partly an > inherent property of the JIT. > > > Would adjusting the JIT's threshold and function_threshold settings help > here? Not really, since you'll compile more stuff than you want From fijall at gmail.com Fri Oct 7 20:02:06 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 7 Oct 2011 20:02:06 +0200 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: References: Message-ID: On Fri, Oct 7, 2011 at 12:24 PM, Max Lavrenov wrote: > Hello all > > I've written small program that show the problem. > http://paste.pocoo.org/show/488730/ > > You can test it with this line ab -n 1000 http://localhost:8032/ > > After 810 request i am starting to get errors > http://paste.pocoo.org/show/488732/ > > if i start it with --jit off it work fine > > >> Yes, please try to find a minimal example that shows the issue. 
>> >> -- >> Amaury Forgeot d'Arc > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > This is fixed on trunk. Can you try nightly? We should release 1.7 some time soon. From igor.katson at gmail.com Fri Oct 7 20:07:53 2011 From: igor.katson at gmail.com (Igor Katson) Date: Fri, 07 Oct 2011 22:07:53 +0400 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> Message-ID: <4E8F3FF9.3070907@gmail.com> On 10/07/2011 08:50 PM, Maciej Fijalkowski wrote: > On Fri, Oct 7, 2011 at 1:33 PM, Igor Katson wrote: >> When I first started benchmarking one of my Django sites, >> http://trip-travel.ru/ (using postgres driver pypq), >> I was disappointed by the results. PyPy was visually much slower than >> cPython (I just looked at how the site loads in the browser) >> >> But today I got some incredible results, I finally made PyPy to work faster >> than cPython, and found out that it got faster after loading the page >> several hundred times with "ab" or "siege" >> >> Here a the results, of mean response time by querying the home page with >> apache's "ab" (cPython 2.7.2, Django 1.3, PyPy 1.6.0), served with cherrypy >> wsgi server: >> >> After 10 requests (excluding the first request): >> cPython - 163.529 ms >> PyPy - 460.879 ms >> >> 50 request more: >> cPython - 168.539 >> PyPy - 249.850 >> >> 100 requests more: >> cPython - 166.278 ms >> PyPy - 131.104 >> >> 100 requests more: >> cPython - 165.820 >> PyPy - 115.446 >> >> 300 requests more: >> cPython - 165.543 >> PyPy - 107.636 >> >> 300 requests more: >> cPython - 166.425 >> PyPy - 103.065 > Thanks for doing the benchmarks :) > >> As we can see, the JIT needs much time to warm up, but when it does, the >> result is pretty noticeable. 
>> By the way, with psycopg2, the site responds for 155.766 ms in average (only >> 10 ms faster), so using PyPy with Django makes much sense for me. >> >> As for now, pypy cannot run with uWSGI, which I use in production, but maybe >> i'll switch to PyPy for production deployments if "PyPy + PyPQ + Some pure >> python WSGI server" suite will outperform (uWSGI + cPython + psycopg2). >> Though, the need to load the page 500 times after each server reload is not >> comfortable. > I've heard people using gunicorn. Maybe this is a good try? Loading > the pages is indeed annoying, but you need to load it a couple times > for results not to be noticably slower :) We kind of know that the JIT > warmup time is high, but it's partly a thing to fix and partly an > inherent property of the JIT. > > Tried gunicorn, nothing special, the speed is roughly the same. Unfortunately, I noticed that a single instance takes way too much memory to bring that to production, where I pay for the actually used memory. 4 uWSGI workers eat 14 megabytes each, and pypy's memory usage increases, and after a couple thousand requests a single worker took 250mb, more than 15 times more. From jlsandell at gmail.com Fri Oct 7 20:13:24 2011 From: jlsandell at gmail.com (Jeremy Sandell) Date: Fri, 7 Oct 2011 14:13:24 -0400 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> Message-ID: On Fri, Oct 7, 2011 at 1:21 PM, Leonardo Santagada wrote: > No one complains about it, but Solr does do a warmup phase while > bringing the server up before starting to serve real requests. FWIW, this is something I often do with Flask and Werkzeug (on PyPy and gunicorn) after restarting - pass a tuple of rules (think named urlpatterns in Django) to a function which a) looks them up in the Map, and b) hits them a few times to "get the system going". 
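The warm-up pass Jeremy describes could be sketched roughly like this — a stdlib-only illustration with made-up names and routes, not his actual Flask/Werkzeug code:

```python
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2, current at the time

def warm(base_url, paths, times=50, fetch=None):
    """Hit each path `times` times right after a restart, so the JIT
    traces the hot request paths before real traffic arrives.
    Returns the number of requests made."""
    if fetch is None:
        fetch = lambda url: urlopen(url).read()
    count = 0
    for path in paths:
        for _ in range(times):
            fetch(base_url + path)
            count += 1
    return count

# e.g. from a deploy script, once the server is up (hypothetical routes):
#   warm("http://127.0.0.1:8000", ["/", "/trips/", "/search/"], times=100)
```

As Michael notes above, a pass like this is unlikely to reach every code path, so it only softens the warm-up penalty for the routes you list.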
There wasn't really much science behind my setting this up, only that I didn't like how long the first request took. (: Best regards, Jeremy Sandell -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Oct 7 20:17:01 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 7 Oct 2011 20:17:01 +0200 Subject: [pypy-dev] Success histories needed Message-ID: Hi We're gathering success stories for the website. Anyone feel like providing name, company and some data how pypy worked for them (we accept info how it didn't work on bugs.pypy.org all the time :) Cheers, fijal From angelflow at yahoo.com Sat Oct 8 00:48:11 2011 From: angelflow at yahoo.com (Andy) Date: Fri, 7 Oct 2011 15:48:11 -0700 (PDT) Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: <4E8F3FF9.3070907@gmail.com> References: <4E8EE387.2070806@gmail.com> <4E8F3FF9.3070907@gmail.com> Message-ID: <1318027691.39261.YahooMailNeo@web111309.mail.gq1.yahoo.com> 15 times more memory? That's a lot. Interestingly Quora reported that their PyPy processes were only 50% larger than CPython ones: http://www.quora.com/Quora-Infrastructure/Did-Quoras-switch-to-PyPy-result-in-increased-memory-consumption "our PyPy worker processes themselves take approximately 50% more memory than our equivalent CPython worker processes, although we did not do a large amount of tuning of the GC. Regardless, this wasn't the main cause of our memory blowup. "In our development, we found that certain functions were not worth being ported from their C libraries to pure Python, things like crypto, lxml, PyML, and a couple other random libraries. Our solution for those functions was to run a parallel CPython process that would do nothing but take arguments via an execnet channel, and output return values via the same execnet channel. 
"The overhead for some of these Python processes, especially for the ones that required a lot of state (for example,?PyML) is comparable to the amount of memory taken by the master PyPy process, effectively causing a 2-3x blowup in memory just to maintain the CPython processes; this is our main memory sink for our PyPy branch." ---- I wonder what accounts for this large difference in PyPy memory consumption (50% more vs. 1,400% more). What type of "large amount of tuning of the GC" did Quora do? ________________________________ From: Igor Katson To: Maciej Fijalkowski Cc: pypy-dev at python.org Sent: Friday, October 7, 2011 2:07 PM Subject: Re: [pypy-dev] Benchmarking PyPy performance on real-world Django app Tried gunicorn, nothing special, the speed is roughly the same. Unfortunately, I noticed that a single instance takes way to much memory to bring that to production, where I pay for the actually used memory. 4 uWSGI workers eat 14 megabytes each, and pypy's memory usage increases, and after a couple thousands or requests a single worker took 250mb, more than 15 times more. _______________________________________________ pypy-dev mailing list pypy-dev at python.org http://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat Oct 8 00:50:47 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 8 Oct 2011 00:50:47 +0200 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: <1318027691.39261.YahooMailNeo@web111309.mail.gq1.yahoo.com> References: <4E8EE387.2070806@gmail.com> <4E8F3FF9.3070907@gmail.com> <1318027691.39261.YahooMailNeo@web111309.mail.gq1.yahoo.com> Message-ID: On Sat, Oct 8, 2011 at 12:48 AM, Andy wrote: > 15 times more memory? That's a lot. 
> Interestingly Quora reported that their PyPy processes were only 50% larger > than CPython ones: > http://www.quora.com/Quora-Infrastructure/Did-Quoras-switch-to-PyPy-result-in-increased-memory-consumption > > "our PyPy worker processes themselves take approximately 50% more memory > than our equivalent CPython worker processes, although we did not do a large > amount of tuning of the GC. Regardless, this wasn't the main cause of our > memory blowup. > "In our development, we found that certain functions were not worth being > ported from their C libraries to pure Python, things like > > crypto > > , > > lxml > > , > > PyML > > , and a couple other random libraries. Our solution for those functions was > to run a parallel CPython process that would do nothing but take arguments > via an > > execnet > > channel, and output return values via the same > > execnet > > ?channel. > > "The overhead for some of these Python processes, especially for the ones > that required a lot of state (for example, > > PyML > > ) is comparable to the amount of memory taken by the master PyPy process, > effectively causing a 2-3x blowup in memory just to maintain the CPython > processes; this is our main memory sink for our PyPy branch." > ---- > I wonder what accounts for this large difference in PyPy memory consumption > (50% more vs. 1,400% more). What type of "large amount of tuning of the GC" > did Quora do? I think this is a bug, but also different stack was used right? Indeed, pypy should not use much more than 2x of CPython usage, I would like to give it a go if you can come up with a small reproducible example. 
Cheers, fijal From igor.katson at gmail.com Sat Oct 8 01:28:05 2011 From: igor.katson at gmail.com (Igor Katson) Date: Sat, 08 Oct 2011 03:28:05 +0400 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> <4E8F3FF9.3070907@gmail.com> <1318027691.39261.YahooMailNeo@web111309.mail.gq1.yahoo.com> Message-ID: <4E8F8B05.60005@gmail.com> On 10/08/2011 02:50 AM, Maciej Fijalkowski wrote: > On Sat, Oct 8, 2011 at 12:48 AM, Andy wrote: >> 15 times more memory? That's a lot. >> Interestingly Quora reported that their PyPy processes were only 50% larger >> than CPython ones: >> http://www.quora.com/Quora-Infrastructure/Did-Quoras-switch-to-PyPy-result-in-increased-memory-consumption >> >> "our PyPy worker processes themselves take approximately 50% more memory >> than our equivalent CPython worker processes, although we did not do a large >> amount of tuning of the GC. Regardless, this wasn't the main cause of our >> memory blowup. >> "In our development, we found that certain functions were not worth being >> ported from their C libraries to pure Python, things like >> >> crypto >> >> , >> >> lxml >> >> , >> >> PyML >> >> , and a couple other random libraries. Our solution for those functions was >> to run a parallel CPython process that would do nothing but take arguments >> via an >> >> execnet >> >> channel, and output return values via the same >> >> execnet >> >> channel. >> >> "The overhead for some of these Python processes, especially for the ones >> that required a lot of state (for example, >> >> PyML >> >> ) is comparable to the amount of memory taken by the master PyPy process, >> effectively causing a 2-3x blowup in memory just to maintain the CPython >> processes; this is our main memory sink for our PyPy branch." >> ---- >> I wonder what accounts for this large difference in PyPy memory consumption >> (50% more vs. 1,400% more). 
What type of "large amount of tuning of the GC" >> did Quora do? > I think this is a bug, but also different stack was used right? > Indeed, pypy should not use much more than 2x of CPython usage, I > would like to give it a go if you can come up with a small > reproducible example. > > Cheers, > fijal yeah, I will send you the test suite in a while. This is a bit another setup: same site with no data and sqlite instead of pypq, but it's clear that the memory usage is also huge, though far more requests are needed to bump memory usage to 200mb. cPython memory usage is constant. From fijall at gmail.com Sat Oct 8 01:35:15 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 8 Oct 2011 01:35:15 +0200 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: <4E8F8B05.60005@gmail.com> References: <4E8EE387.2070806@gmail.com> <4E8F3FF9.3070907@gmail.com> <1318027691.39261.YahooMailNeo@web111309.mail.gq1.yahoo.com> <4E8F8B05.60005@gmail.com> Message-ID: On Sat, Oct 8, 2011 at 1:28 AM, Igor Katson wrote: > On 10/08/2011 02:50 AM, Maciej Fijalkowski wrote: >> >> On Sat, Oct 8, 2011 at 12:48 AM, Andy ?wrote: >>> >>> 15 times more memory? That's a lot. >>> Interestingly Quora reported that their PyPy processes were only 50% >>> larger >>> than CPython ones: >>> >>> http://www.quora.com/Quora-Infrastructure/Did-Quoras-switch-to-PyPy-result-in-increased-memory-consumption >>> >>> "our PyPy worker processes themselves take approximately 50% more memory >>> than our equivalent CPython worker processes, although we did not do a >>> large >>> amount of tuning of the GC. Regardless, this wasn't the main cause of our >>> memory blowup. >>> "In our development, we found that certain functions were not worth being >>> ported from their C libraries to pure Python, things like >>> >>> crypto >>> >>> , >>> >>> lxml >>> >>> , >>> >>> PyML >>> >>> , and a couple other random libraries. 
Our solution for those functions >>> was >>> to run a parallel CPython process that would do nothing but take >>> arguments >>> via an >>> >>> execnet >>> >>> channel, and output return values via the same >>> >>> execnet >>> >>> ?channel. >>> >>> "The overhead for some of these Python processes, especially for the ones >>> that required a lot of state (for example, >>> >>> PyML >>> >>> ) is comparable to the amount of memory taken by the master PyPy process, >>> effectively causing a 2-3x blowup in memory just to maintain the CPython >>> processes; this is our main memory sink for our PyPy branch." >>> ---- >>> I wonder what accounts for this large difference in PyPy memory >>> consumption >>> (50% more vs. 1,400% more). What type of "large amount of tuning of the >>> GC" >>> did Quora do? >> >> I think this is a bug, but also different stack was used right? >> Indeed, pypy should not use much more than 2x of CPython usage, I >> would like to give it a go if you can come up with a small >> reproducible example. >> >> Cheers, >> fijal > > yeah, I will send you the test suite in a while. This is a bit another > setup: same site with no data and sqlite instead of pypq, but it's clear > that the memory usage is also huge, though far more requests are needed to > bump memory usage to 200mb. cPython memory usage is constant. > It *might* be the same thing as with tornado where memory usage grows constantly. 
Justin peel is working on it and it'll be in 1.7 some time soon (it does not have to though, but it does sound remarkably similar) From angelflow at yahoo.com Sat Oct 8 01:56:50 2011 From: angelflow at yahoo.com (Andy) Date: Fri, 7 Oct 2011 16:56:50 -0700 (PDT) Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> <4E8F3FF9.3070907@gmail.com> <1318027691.39261.YahooMailNeo@web111309.mail.gq1.yahoo.com> <4E8F8B05.60005@gmail.com> Message-ID: <1318031810.93916.YahooMailNeo@web111310.mail.gq1.yahoo.com> What causes Tornado's memory usage to grow constantly? Is it a bug in PyPy or Tornado? ________________________________ It *might* be the same thing as with tornado where memory usage grows constantly. Justin peel is working on it and it'll be in 1.7 some time soon (it does not have to though, but it does sound remarkably similar) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat Oct 8 02:11:36 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 8 Oct 2011 02:11:36 +0200 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: <1318031810.93916.YahooMailNeo@web111310.mail.gq1.yahoo.com> References: <4E8EE387.2070806@gmail.com> <4E8F3FF9.3070907@gmail.com> <1318027691.39261.YahooMailNeo@web111309.mail.gq1.yahoo.com> <4E8F8B05.60005@gmail.com> <1318031810.93916.YahooMailNeo@web111310.mail.gq1.yahoo.com> Message-ID: On Sat, Oct 8, 2011 at 1:56 AM, Andy wrote: > What causes Tornado's memory usage to grow constantly? Is it a bug in PyPy > or Tornado? PyPy. It does get freed at *some point*, but some point is too far in the future. It was about objects like sockets and hashes not accounting for memory usage by underlaying structures. > ________________________________ > It *might* be the same thing as with tornado where memory usage grows > constantly. 
Justin peel is working on it and it'll be in 1.7 some time > soon (it does not have to though, but it does sound remarkably > similar) > > > From osadchiy.ilya at gmail.com Sat Oct 8 11:20:37 2011 From: osadchiy.ilya at gmail.com (Ilya Osadchiy) Date: Sat, 8 Oct 2011 11:20:37 +0200 Subject: [pypy-dev] Bugs in using `easy_install` in PyPy 1.6 on Windows In-Reply-To: References: Message-ID: On Thu, Oct 6, 2011 at 3:37 PM, Amaury Forgeot d'Arc wrote: > I don't think there are tests for an "installed" interpreter at the moment. Perhaps some [separate] tests on "installed" pypy should be added. `easy_install` and friends are rather critical for usability. From fijall at gmail.com Sat Oct 8 11:44:55 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 8 Oct 2011 11:44:55 +0200 Subject: [pypy-dev] Bugs in using `easy_install` in PyPy 1.6 on Windows In-Reply-To: References: Message-ID: On Sat, Oct 8, 2011 at 11:20 AM, Ilya Osadchiy wrote: > On Thu, Oct 6, 2011 at 3:37 PM, Amaury Forgeot d'Arc wrote: >> I don't think there are tests for an "installed" interpreter at the moment. > > Perhaps some [separate] tests on "installed" pypy should be added. > `easy_install` and friends are rather critical for usability. Patches welcome :) Especially when it comes to buildbot infrastructure From jfcgauss at gmail.com Sat Oct 8 11:49:35 2011 From: jfcgauss at gmail.com (Serhat Sevki Dincer) Date: Sat, 8 Oct 2011 12:49:35 +0300 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app Message-ID: My favorite pypy version is 46768: It passes its own tests + It runs Django 1.3.1 :P Igor, did you try this version by any chance? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ipc at informatic.io Fri Oct 7 19:19:45 2011 From: ipc at informatic.io (Ian P. 
Cooke) Date: Fri, 7 Oct 2011 12:19:45 -0500 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> Message-ID: On Oct 07, 2011, at 11:50 , Maciej Fijalkowski wrote: > On Fri, Oct 7, 2011 at 1:33 PM, Igor Katson wrote: >> [...] >> Though, the need to load the page 500 times after each server reload is not >> comfortable. > [...] > I've heard people using gunicorn. Maybe this is a good try? Loading > the pages is indeed annoying, but you need to load it a couple times > for results not to be noticably slower :) We kind of know that the JIT > warmup time is high, but it's partly a thing to fix and partly an > inherent property of the JIT. > Would adjusting the JIT's threshold and function_threshold settings help here? -- Ian P. Cooke Computer Scientist, Software "The agents of a ubiquitous system stand to it in the same relation as musical instruments to an orchestra", Robin Milner -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor.katson at gmail.com Sat Oct 8 12:39:02 2011 From: igor.katson at gmail.com (Igor Katson) Date: Sat, 08 Oct 2011 14:39:02 +0400 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: Message-ID: <4E902846.3040309@gmail.com> On 10/08/2011 01:49 PM, Serhat Sevki Dincer wrote: > My favorite pypy version is 46768: It passes its own tests + It runs > Django 1.3.1 :P > Igor, did you try this version by any chance? > Well, mine versions run without problems, and I've tried the latest trunk, just this memory bug is present. So I don't think it may help. From max.lavrenov at gmail.com Sat Oct 8 16:05:06 2011 From: max.lavrenov at gmail.com (Max Lavrenov) Date: Sat, 8 Oct 2011 18:05:06 +0400 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: References: Message-ID: Hello Maciej Thanks to fix it so fast. 
Unfortunately when i tried building pypy from trunk i got error http://paste.pocoo.org/show/489392/ Best wishes, Max On Fri, Oct 7, 2011 at 22:02, Maciej Fijalkowski wrote: > On Fri, Oct 7, 2011 at 12:24 PM, Max Lavrenov > wrote: > > Hello all > > > > I've written small program that show the problem. > > http://paste.pocoo.org/show/488730/ > > > > You can test it with this line ab -n 1000 http://localhost:8032/ > > > > After 810 request i am starting to get errors > > http://paste.pocoo.org/show/488732/ > > > > if i start it with --jit off it work fine > > > > > >> Yes, please try to find a minimal example that shows the issue. > >> > >> -- > >> Amaury Forgeot d'Arc > > > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > > > > This is fixed on trunk. Can you try nightly? We should release 1.7 > some time soon. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat Oct 8 16:50:50 2011 From: arigo at tunes.org (Armin Rigo) Date: Sat, 8 Oct 2011 16:50:50 +0200 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: References: Message-ID: Hi Max, On Sat, Oct 8, 2011 at 16:05, Max Lavrenov wrote: > http://paste.pocoo.org/show/489392/ Bah. It's because you have in your path a command called just "python" that runs python 3. I will try to fix that. Armin. From fijall at gmail.com Sat Oct 8 23:06:20 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 8 Oct 2011 23:06:20 +0200 Subject: [pypy-dev] PyPy 1.7 coming Message-ID: Hi I would like to start thinking about PyPy 1.7 and I volunteer for being a release manager. It seems modulo few failing tests we're generally in a good shape. Things I would like to get into. * my json improvements branch * justin's memory pressure branch that will hopefully fix "leak" in tornado Anyone wants to get something else there? 
Cheers, fijal From exarkun at twistedmatrix.com Sun Oct 9 00:25:14 2011 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Sat, 08 Oct 2011 22:25:14 -0000 Subject: [pypy-dev] PyPy 1.7 coming In-Reply-To: References: Message-ID: <20111008222514.23178.1880737513.divmod.xquotient.18@localhost.localdomain> On 09:06 pm, fijall at gmail.com wrote: >Hi > >I would like to start thinking about PyPy 1.7 and I volunteer for >being a release manager. It seems modulo few failing tests we're >generally in a good shape. Things I would like to get into. > >* my json improvements branch >* justin's memory pressure branch that will hopefully fix "leak" in >tornado > >Anyone wants to get something else there? I think https://bugs.pypy.org/issue889 (cpyext headers) would be nice, particularly if it is just a release bug and nothing deeper. Jean-Paul From fijall at gmail.com Sun Oct 9 00:49:44 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 9 Oct 2011 00:49:44 +0200 Subject: [pypy-dev] PyPy 1.7 coming In-Reply-To: <20111008222514.23178.1880737513.divmod.xquotient.18@localhost.localdomain> References: <20111008222514.23178.1880737513.divmod.xquotient.18@localhost.localdomain> Message-ID: On Sun, Oct 9, 2011 at 12:25 AM, wrote: > On 09:06 pm, fijall at gmail.com wrote: >> >> Hi >> >> I would like to start thinking about PyPy 1.7 and I volunteer for >> being a release manager. It seems modulo few failing tests we're >> generally in a good shape. Things I would like to get into. >> >> * my json improvements branch >> * justin's memory pressure branch that will hopefully fix "leak" in >> tornado >> >> Anyone wants to get something else there? > > I think https://bugs.pypy.org/issue889 (cpyext headers) would be nice, > particularly if it is just a release bug and nothing deeper. > > Jean-Paul oops. Feel like providing a patch for release.py? 
;-) From timonator at perpetuum-immobile.de Sun Oct 9 02:26:47 2011 From: timonator at perpetuum-immobile.de (Timo Paulssen) Date: Sun, 09 Oct 2011 02:26:47 +0200 Subject: [pypy-dev] PyPy 1.7 coming In-Reply-To: References: Message-ID: <4E90EA47.1000500@perpetuum-immobile.de> On 08.10.2011 23:06, Maciej Fijalkowski wrote: > Hi > > I would like to start thinking about PyPy 1.7 and I volunteer for > being a release manager. It seems modulo few failing tests we're > generally in a good shape. Things I would like to get into. > > * my json improvements branch > * justin's memory pressure branch that will hopefully fix "leak" in tornado > > Anyone wants to get something else there? > > Cheers, > fijal Hello, I'd like to get my separate-applevel-numpy branch merged in and, if possible, get the numpy-data-buffer merged, too. There's been a little merging accident in my local repository, but I believe it can easily be fixed. cheers - Timo From arigo at tunes.org Sun Oct 9 10:39:42 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 9 Oct 2011 10:39:42 +0200 Subject: [pypy-dev] PyPy 1.7 coming In-Reply-To: References: Message-ID: Hi Maciej, On Sat, Oct 8, 2011 at 23:06, Maciej Fijalkowski wrote: > Anyone wants to get something else there? There is: * continulets/greenlets/stacklets need to be fixed, or disabled, as per https://bugs.pypy.org/issue895 * https://bugs.pypy.org/issue884 on Windows is still terrible * do we still want to get rid of asmgcc? A bient?t, Armin. From ram at rachum.com Sun Oct 9 12:15:15 2011 From: ram at rachum.com (Ram Rachum) Date: Sun, 9 Oct 2011 12:15:15 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> Message-ID: On Fri, Oct 7, 2011 at 11:03 AM, Ram Rachum wrote: > > Hey guys, > > I've managed to produce a VM that shows the bug. 
> > You can download it here: > > http://dl.dropbox.com/u/1927707/VM%20for%20debugging%20PyPy.rar > > It's a VMWare Virtual Machine, and it weighs 2 GB compressed. > > Once you fire up the VM, there are short instructions in a text file on the > desktop for making PyPy crash. > > Will anyone have time to try that? > > > Thanks, > Ram. > Did anyone give that a try? Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdouche at gmail.com Sun Oct 9 12:16:59 2011 From: sdouche at gmail.com (Sebastien Douche) Date: Sun, 9 Oct 2011 12:16:59 +0200 Subject: [pypy-dev] Feedback about the PPA packages Message-ID: Hi all, I'm testing the packages and I don't notice any problems with the packaging. No complaints, just a remark: the good thing with cpython packaging is you can install N versions (py2.6, py2.7...) in the same machine and it doesn't seem to be the case with pypy packages (am I wrong?). -- Sebastien Douche Twitter : @sdouche From fijall at gmail.com Sun Oct 9 12:25:48 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 9 Oct 2011 12:25:48 +0200 Subject: [pypy-dev] Feedback about the PPA packages In-Reply-To: References: Message-ID: On Sun, Oct 9, 2011 at 12:16 PM, Sebastien Douche wrote: > Hi all, > I'm testing the packages and I don't notice any problems with the > packaging. No complaints, just a remark: the good thing with cpython > packaging is you can install N versions (py2.6, py2.7...) in the same > machine and it doesn't seem to be the case with pypy packages (am I > wrong?). I think you're right, but that's a first step towards making it work :-) Contributions welcomed. From arigo at tunes.org Sun Oct 9 12:29:48 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 9 Oct 2011 12:29:48 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> Message-ID: Hi, On Sun, Oct 9, 2011 at 12:15, Ram Rachum wrote: > Did anyone give that a try? 
Sorry, it may take a while, because Windows is not our primary platform. In the meantime, could you try with this latest version of pypy? Thanks! http://buildbot.pypy.org/nightly/trunk/pypy-c-jit-latest-win32.zip A bientôt, Armin. From arigo at tunes.org Sun Oct 9 12:34:39 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 9 Oct 2011 12:34:39 +0200 Subject: [pypy-dev] Feedback about the PPA packages In-Reply-To: References: Message-ID: Hi, On Sun, Oct 9, 2011 at 12:25, Maciej Fijalkowski wrote: >> packaging is you can install N versions (py2.6, py2.7...) in the same >> machine and it doesn't seem to be the case with pypy packages (am I >> wrong?). > > I think you're right, but that's a first step towards making it work > :-) Contributions welcomed. Ah, because they all install to /opt/pypy/ ? That's just a convention; feel free to come up with packages that install PyPy under any path that makes sense to you, like /opt/pypy-1.6/. That path is not hard-coded anywhere. As long as you don't break the directory into separate pieces, it should work fine. A bientôt, Armin. From sdouche at gmail.com Sun Oct 9 15:03:41 2011 From: sdouche at gmail.com (Sebastien Douche) Date: Sun, 9 Oct 2011 15:03:41 +0200 Subject: [pypy-dev] Feedback about the PPA packages In-Reply-To: References: Message-ID: On Sun, Oct 9, 2011 at 12:34, Armin Rigo wrote: Hi Armin > Ah, because they all install to /opt/pypy/ ? No, most of them are good, but the frontend is "pypy" and not "pypy1.6" (same thing with the man page and a few other things). python2.7: /usr/bin/python2.7 /usr/share/man/man1/python2.7.1.gz pypy1.6: /usr/bin/pypy /usr/share/man/man1/pypy.1.gz As Maciej said, it's a first step (and a good one!). 
-- Sebastien Douche Twitter : @sdouche From anto.cuni at gmail.com Sun Oct 9 16:26:10 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Sun, 09 Oct 2011 16:26:10 +0200 Subject: [pypy-dev] PyPy 1.7 coming In-Reply-To: References: Message-ID: <4E91AF02.6070607@gmail.com> Hi Maciek, On 08/10/11 23:06, Maciej Fijalkowski wrote: > Hi > > I would like to start thinking about PyPy 1.7 and I volunteer for > being a release manager. It seems modulo few failing tests we're > generally in a good shape. Things I would like to get into. > > * my json improvements branch > * justin's memory pressure branch that will hopefully fix "leak" in tornado > > Anyone wants to get something else there? I'd like to have ffistruct in it. However, I didn't have much time to work on it lately, so it's unclear how long it will take to finish. I suppose it mostly depends on how quickly you want to release 1.7. If it's in few days, no hope for ffistruct. If it's few weeks, I might be able to do it. ciao, Anto From fijall at gmail.com Sun Oct 9 17:02:33 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 9 Oct 2011 17:02:33 +0200 Subject: [pypy-dev] PyPy 1.7 coming In-Reply-To: <4E91AF02.6070607@gmail.com> References: <4E91AF02.6070607@gmail.com> Message-ID: On Sun, Oct 9, 2011 at 4:26 PM, Antonio Cuni wrote: > Hi Maciek, > > On 08/10/11 23:06, Maciej Fijalkowski wrote: >> >> Hi >> >> I would like to start thinking about PyPy 1.7 and I volunteer for >> being a release manager. It seems modulo few failing tests we're >> generally in a good shape. Things I would like to get into. >> >> * my json improvements branch >> * justin's memory pressure branch that will hopefully fix "leak" in >> tornado >> >> Anyone wants to get something else there? > > I'd like to have ffistruct in it. However, I didn't have much time to work > on it lately, so it's unclear how long it will take to finish. > > I suppose it mostly depends on how quickly you want to release 1.7. 
If it's > in few days, no hope for ffistruct. If it's few weeks, I might be able to > do it. > > ciao, > Anto > As usual with pypy, it's hopefully next week, but then we can have another release this year with no real issues. From igor.katson at gmail.com Sun Oct 9 20:58:03 2011 From: igor.katson at gmail.com (Igor Katson) Date: Sun, 09 Oct 2011 22:58:03 +0400 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: References: <4E8EE387.2070806@gmail.com> <4E8F3FF9.3070907@gmail.com> <1318027691.39261.YahooMailNeo@web111309.mail.gq1.yahoo.com> <4E8F8B05.60005@gmail.com> Message-ID: <4E91EEBB.5050602@gmail.com> On 10/08/2011 03:35 AM, Maciej Fijalkowski wrote: > On Sat, Oct 8, 2011 at 1:28 AM, Igor Katson wrote: >> On 10/08/2011 02:50 AM, Maciej Fijalkowski wrote: >>> On Sat, Oct 8, 2011 at 12:48 AM, Andy wrote: >>>> 15 times more memory? That's a lot. >>>> Interestingly Quora reported that their PyPy processes were only 50% >>>> larger than CPython ones: >>>> http://www.quora.com/Quora-Infrastructure/Did-Quoras-switch-to-PyPy-result-in-increased-memory-consumption >>>> "our PyPy worker processes themselves take approximately 50% more memory >>>> than our equivalent CPython worker processes, although we did not do a >>>> large amount of tuning of the GC. Regardless, this wasn't the main cause of our >>>> memory blowup. >>>> "In our development, we found that certain functions were not worth being >>>> ported from their C libraries to pure Python, things like crypto, lxml, >>>> PyML, and a couple other random libraries. Our solution for those functions >>>> was to run a parallel CPython process that would do nothing but take >>>> arguments via an execnet channel, and output return values via the same >>>> execnet channel. 
>>>> "The overhead for some of these Python processes, especially for the ones >>>> that required a lot of state (for example, PyML) is comparable to the amount of memory taken by the master PyPy process, >>>> effectively causing a 2-3x blowup in memory just to maintain the CPython >>>> processes; this is our main memory sink for our PyPy branch." >>>> ---- >>>> I wonder what accounts for this large difference in PyPy memory >>>> consumption (50% more vs. 1,400% more). What type of "large amount of tuning of the >>>> GC" did Quora do? >>> I think this is a bug, but also different stack was used right? >>> Indeed, pypy should not use much more than 2x of CPython usage, I >>> would like to give it a go if you can come up with a small >>> reproducible example. >>> >>> Cheers, >>> fijal >> yeah, I will send you the test suite in a while. This is a bit another >> setup: same site with no data and sqlite instead of pypq, but it's clear >> that the memory usage is also huge, though far more requests are needed to >> bump memory usage to 200mb. cPython memory usage is constant. >> > It *might* be the same thing as with tornado where memory usage grows > constantly. Justin peel is working on it and it'll be in 1.7 some time > soon (it does not have to though, but it does sound remarkably > similar) I tried with that branch, but there is no difference. Will you try to debug it with the stuff I gave you? 
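For the kind of per-request memory growth described in this thread, the growth can be made visible without the full Django setup by sampling the process's peak RSS between batches of simulated requests. A minimal sketch (Unix-only, via the stdlib `resource` module; `handle_request` is just a stand-in for a real request cycle, and note that `ru_maxrss` is reported in kilobytes on Linux but bytes on OS X):

```python
import gc
import resource

def handle_request():
    # Stand-in for one request/response cycle; a real test would hit the app.
    return "".join(chr(65 + i % 26) for i in range(1000))

def peak_rss():
    # Peak resident set size of this process (KB on Linux, bytes on OS X).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

samples = []
for batch in range(5):
    for _ in range(200):
        handle_request()
    gc.collect()
    samples.append(peak_rss())

# On a leaking interpreter these numbers keep climbing batch after batch;
# on CPython they should flatten out almost immediately.
print(samples)
```

Comparing the printed series under CPython and under a PyPy nightly would show whether memory levels off or keeps growing with request count, which is the symptom reported above.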
From fijall at gmail.com Sun Oct 9 21:15:17 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 9 Oct 2011 21:15:17 +0200 Subject: [pypy-dev] Benchmarking PyPy performance on real-world Django app In-Reply-To: <4E91EEBB.5050602@gmail.com> References: <4E8EE387.2070806@gmail.com> <4E8F3FF9.3070907@gmail.com> <1318027691.39261.YahooMailNeo@web111309.mail.gq1.yahoo.com> <4E8F8B05.60005@gmail.com> <4E91EEBB.5050602@gmail.com> Message-ID: On Sun, Oct 9, 2011 at 8:58 PM, Igor Katson wrote: > On 10/08/2011 03:35 AM, Maciej Fijalkowski wrote: >> On Sat, Oct 8, 2011 at 1:28 AM, Igor Katson wrote: >>> On 10/08/2011 02:50 AM, Maciej Fijalkowski wrote: >>>> On Sat, Oct 8, 2011 at 12:48 AM, Andy wrote: >>>>> 15 times more memory? That's a lot. >>>>> Interestingly Quora reported that their PyPy processes were only 50% >>>>> larger than CPython ones: >>>>> http://www.quora.com/Quora-Infrastructure/Did-Quoras-switch-to-PyPy-result-in-increased-memory-consumption >>>>> "our PyPy worker processes themselves take approximately 50% more >>>>> memory than our equivalent CPython worker processes, although we did not do a >>>>> large amount of tuning of the GC. Regardless, this wasn't the main cause of >>>>> our memory blowup. >>>>> "In our development, we found that certain functions were not worth >>>>> being ported from their C libraries to pure Python, things like crypto, lxml, >>>>> PyML, and a couple other random libraries. Our solution for those functions >>>>> was to run a parallel CPython process that would do nothing but take >>>>> arguments via an execnet channel, and output return values via the same >>>>> execnet channel. 
>>>>> "The overhead for some of these Python processes, especially for the >>>>> ones that required a lot of state (for example, PyML) is comparable to the amount of memory taken by the master PyPy >>>>> process, effectively causing a 2-3x blowup in memory just to maintain the >>>>> CPython processes; this is our main memory sink for our PyPy branch." >>>>> ---- >>>>> I wonder what accounts for this large difference in PyPy memory >>>>> consumption (50% more vs. 1,400% more). What type of "large amount of tuning of the >>>>> GC" did Quora do? >>>> I think this is a bug, but also different stack was used right? >>>> Indeed, pypy should not use much more than 2x of CPython usage, I >>>> would like to give it a go if you can come up with a small >>>> reproducible example. >>>> >>>> Cheers, >>>> fijal >>> yeah, I will send you the test suite in a while. This is a bit another >>> setup: same site with no data and sqlite instead of pypq, but it's clear >>> that the memory usage is also huge, though far more requests are needed >>> to bump memory usage to 200mb. cPython memory usage is constant. >>> >> It *might* be the same thing as with tornado where memory usage grows >> constantly. Justin peel is working on it and it'll be in 1.7 some time >> soon (it does not have to though, but it does sound remarkably >> similar) > > I tried with that branch, but there is no difference. Will you try to debug > it with the stuff I gave you? > Well, the branch is not ready yet, so no point. Yes, we're trying. From arigo at tunes.org Sun Oct 9 21:18:09 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 9 Oct 2011 21:18:09 +0200 Subject: [pypy-dev] Feedback about the PPA packages In-Reply-To: References: Message-ID: Hi, On Sun, Oct 9, 2011 at 15:03, Sebastien Douche wrote: > /usr/bin/pypy Ah, no, nothing depends on the particular name of this symlink. A bientôt, Armin. 
From max.lavrenov at gmail.com Mon Oct 10 12:49:22 2011 From: max.lavrenov at gmail.com (Max Lavrenov) Date: Mon, 10 Oct 2011 14:49:22 +0400 Subject: [pypy-dev] strange error in urlunparse In-Reply-To: References: Message-ID: Hi all It works now! First tests show 30% speed up in our applications. Thanks everyone for the help! Best wishes, Max Lavrenov On Fri, Oct 7, 2011 at 22:02, Maciej Fijalkowski wrote: > On Fri, Oct 7, 2011 at 12:24 PM, Max Lavrenov > wrote: > > Hello all > > > > I've written a small program that shows the problem. > > http://paste.pocoo.org/show/488730/ > > > > You can test it with this line: ab -n 1000 http://localhost:8032/ > > > > After 810 requests I am starting to get errors > > http://paste.pocoo.org/show/488732/ > > > > if I start it with --jit off it works fine > > > > > >> Yes, please try to find a minimal example that shows the issue. > >> > >> -- > >> Amaury Forgeot d'Arc > > > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > > > > This is fixed on trunk. Can you try nightly? We should release 1.7 > some time soon. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ram at rachum.com Mon Oct 10 13:11:01 2011 From: ram at rachum.com (Ram Rachum) Date: Mon, 10 Oct 2011 13:11:01 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> Message-ID: Trying to download this file results in getting a tiny corrupted archive. Also, I don't know whether this is a source release or a binary release. I don't know how to compile so I can only use a binary one. On Sun, Oct 9, 2011 at 12:29 PM, Armin Rigo wrote: > Hi, > > On Sun, Oct 9, 2011 at 12:15, Ram Rachum wrote: > > Did anyone give that a try? > > Sorry, it may take a while, because Windows is not our primary > platform. 
In the meantime, could you try with this latest version of > pypy? Thanks! > http://buildbot.pypy.org/nightly/trunk/pypy-c-jit-latest-win32.zip > > > A bientôt, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Mon Oct 10 16:58:52 2011 From: arigo at tunes.org (Armin Rigo) Date: Mon, 10 Oct 2011 16:58:52 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> Message-ID: Hi, On Mon, Oct 10, 2011 at 13:11, Ram Rachum wrote: > Trying to download this file results in getting a tiny corrupted archive. > Also, I don't know whether this is a source release or a binary release. I > don't know how to compile so I can only use a binary one. Bah, I don't know why the recent zip files are all empty. Here is the latest non-empty one: http://buildbot.pypy.org/nightly/trunk/pypy-c-jit-47320-6b92b3aa1cbb-win32.zip It's a binary release. A bientôt, Armin. From ram at rachum.com Mon Oct 10 17:13:06 2011 From: ram at rachum.com (Ram Rachum) Date: Mon, 10 Oct 2011 17:13:06 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> Message-ID: Tried it now with this Zip, getting the same crash. On Mon, Oct 10, 2011 at 4:58 PM, Armin Rigo wrote: > Hi, > > On Mon, Oct 10, 2011 at 13:11, Ram Rachum wrote: > > Trying to download this file results in getting a tiny corrupted archive. > > Also, I don't know whether this is a source release or a binary release. > I > > don't know how to compile so I can only use a binary one. > > Bah, I don't know why the recent zip files are all empty. Here is the > latest non-empty one: > > http://buildbot.pypy.org/nightly/trunk/pypy-c-jit-47320-6b92b3aa1cbb-win32.zip > > It's a binary release. > > > A bientôt, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anto.cuni at gmail.com Tue Oct 11 17:01:05 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 11 Oct 2011 17:01:05 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: <4E6F3FEF.5080600@gmx.de> References: <4E6F3FEF.5080600@gmx.de> Message-ID: <4E945A31.80702@gmail.com> On 13/09/11 13:35, Carl Friedrich Bolz wrote: > Some of us need to be in Stockholm Oct 24 and 28. > Anto needs to be with his family Nov 1. > Fscons starts Friday Nov 11 in Göteborg, and we're giving a talk. > > Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd through > Thursday Nov 10. > > fscons: http://fscons.org/ Nov 11-13 Not sure what day we will be speaking > yet. > > What do the rest of you think of this idea? Did we decide anything wrt the Gothenburg sprint in November? I think it's time booking the flights, else they might become expensive. We should decide the dates and publish the sprint announcement. Probably, I'll be able to come only on the 4th, and then I might stay until the 13th, to attend fscons. ciao, Anto From hakan at debian.org Tue Oct 11 16:54:16 2011 From: hakan at debian.org (Hakan Ardo) Date: Tue, 11 Oct 2011 16:54:16 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: <4E945A31.80702@gmail.com> References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> Message-ID: Hi, sorry for not answering earlier, but the proposed sprint dates (nov 2-10) will most likely work for me too. Can we make bridges a sprint topic? On Tue, Oct 11, 2011 at 5:01 PM, Antonio Cuni wrote: > On 13/09/11 13:35, Carl Friedrich Bolz wrote: > >> Some of us need to be in Stockholm Oct 24 and 28. >> Anto needs to be with his family Nov 1. >> Fscons starts Friday Nov 11 in Göteborg, and we're giving a talk. >> >> Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd through >> Thursday Nov 10. >> >> fscons: http://fscons.org/ Nov 11-13 Not sure what day we will be speaking >> yet. >> >> What do the rest of you think of this idea? 
> > Did we decide anything wrt the Gothenburg sprint in November? I think it's > time booking the flights, else they might become expensive. > > We should decide the dates and publish the sprint announcement. > > Probably, I'll be able to come only on the 4th, and then I might stay until > the 13th, to attend fscons. > > ciao, > Anto > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- Håkan Ardö From bea at changemaker.nu Tue Oct 11 21:14:04 2011 From: bea at changemaker.nu (Bea During) Date: Tue, 11 Oct 2011 21:14:04 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: <4E945A31.80702@gmail.com> References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> Message-ID: <4E94957C.50108@changemaker.nu> Hi there Antonio Cuni skrev 2011-10-11 17:01: > On 13/09/11 13:35, Carl Friedrich Bolz wrote: > >> Some of us need to be in Stockholm Oct 24 and 28. >> Anto needs to be with his family Nov 1. >> Fscons starts Friday Nov 11 in Göteborg, and we're giving a talk. >> >> Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd >> through >> Thursday Nov 10. >> >> fscons: http://fscons.org/ Nov 11-13 Not sure what day we will be >> speaking >> yet. >> >> What do the rest of you think of this idea? > > > Did we decide anything wrt the Gothenburg sprint in November? I think > it's time booking the flights, else they might become expensive. > > We should decide the dates and publish the sprint announcement. > > Probably, I'll be able to come only on the 4th, and then I might stay > until the 13th, to attend fscons. 
> > ciao, > Anto I talked to Laura about this today and she wanted me to email the following and start to push for a sprint announcement (which apparently is not needed since the discussion has been rebooted): - the sprint needs to take place at the Open End office due to bathroom logistic angst in Lauras/Jacobs house (apparently all apartments need to revamp some of the plumbing which collides with the sprint) So we need to add directions to Open End facilities to the sprint announcement: Open End AB, Norra Ågatan 10, 416 64 Göteborg, see http://kartor.eniro.se/m/ajEHO - the suggested dates are as above: Wednesday 2nd of Nov to Thursday 10th of Nov - due to the logistical mess of the bathroom situation at Jacob and Lauras place they will not be able to host people sleeping over in the same manner as they usually do (Anto - there is a bed for you as promised though). So we need to recommend some cheap places for booking accommodations as well (Chalmers Studenthem) - as for topics, it seems Maciej suggests a 1.7 release before the sprint so we need to check the todo list Cheers Bea From fijall at gmail.com Tue Oct 11 21:41:59 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 11 Oct 2011 21:41:59 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: <4E94957C.50108@changemaker.nu> References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> Message-ID: On Tue, Oct 11, 2011 at 9:14 PM, Bea During wrote: > Hi there > > Antonio Cuni skrev 2011-10-11 17:01: >> >> On 13/09/11 13:35, Carl Friedrich Bolz wrote: >> >>> Some of us need to be in Stockholm Oct 24 and 28. >>> Anto needs to be with his family Nov 1. >>> Fscons starts Friday Nov 11 in Göteborg, and we're giving a talk. >>> >>> Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd >>> through >>> Thursday Nov 10. >>> >>> fscons: http://fscons.org/ Nov 11-13 Not sure what day we will be >>> speaking >>> yet. 
>>> What do the rest of you think of this idea? >> >> >> Did we decide anything wrt the Gothenburg sprint in November? I think >> it's time booking the flights, else they might become expensive. >> >> We should decide the dates and publish the sprint announcement. >> >> Probably, I'll be able to come only on the 4th, and then I might stay >> until the 13th, to attend fscons. >> >> ciao, >> Anto > > I talked to Laura about this today and she wanted me to email the following > and start to push for a sprint announcement (which apparently is not needed > since the discussion has been rebooted): > > - the sprint needs to take place at the Open End office due to bathroom > logistic angst in Lauras/Jacobs house (apparently all apartments need to > revamp some of the plumbing which collides with the sprint) > So we need to add directions to Open End facilities to the sprint > announcement: Open End AB, Norra Ågatan 10, 416 64 Göteborg, see > http://kartor.eniro.se/m/ajEHO > > - the suggested dates are as above: Wednesday 2nd of Nov to > Thursday 10th of Nov > > - due to the logistical mess of the bathroom situation at Jacob and Lauras > place they will not be able to host people sleeping over in the same > manner as they usually do (Anto - there is a bed for you as promised > though). So we need to recommend some cheap places for booking > accommodations as well (Chalmers Studenthem) > > - as for topics, it seems Maciej suggests a 1.7 release before the sprint so > we need to check the todo list > > Cheers > > Bea > Hi Maybe with logistics being harder we should move the sprint to some cheaper place? 
(I do appreciate OE lending us a room though) Cheers, fijal From fijall at gmail.com Tue Oct 11 21:42:54 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 11 Oct 2011 21:42:54 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> Message-ID: On Tue, Oct 11, 2011 at 9:41 PM, Maciej Fijalkowski wrote: > On Tue, Oct 11, 2011 at 9:14 PM, Bea During wrote: >> Hi there >> >> Antonio Cuni skrev 2011-10-11 17:01: >>> >>> On 13/09/11 13:35, Carl Friedrich Bolz wrote: >>> >>>> Some of us need to be in Stockholm Oct 24 and 28. >>>> Anto needs to be with his family Nov 1. >>>> Fscons starts Friday Nov 11 in Göteborg, and we're giving a talk. >>>> >>>> Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd >>>> through >>>> Thursday Nov 10. >>>> >>>> fscons: http://fscons.org/ Nov 11-13 Not sure what day we will be >>>> speaking >>>> yet. >>>> >>>> What do the rest of you think of this idea? >>> >>> >>> Did we decide anything wrt the Gothenburg sprint in November? I think >>> it's time booking the flights, else they might become expensive. >>> >>> We should decide the dates and publish the sprint announcement. >>> >>> Probably, I'll be able to come only on the 4th, and then I might stay >>> until the 13th, to attend fscons. 
>>> >>> ciao, >>> Anto >> >> I talked to Laura about this today and she wanted me to email the following >> and start to push for a sprint announcement (which apparently is not needed >> since the discussion has been rebooted): >> >> - the sprint needs to take place at the Open End office due to bathroom >> logistic angst in Lauras/Jacobs house (apparently all apartments need to >> revamp some of the plumbing which collides with the sprint) >> So we need to add directions to Open End facilities to the sprint >> announcement: Open End AB, Norra Ågatan 10, 416 64 Göteborg, see >> http://kartor.eniro.se/m/ajEHO >> >> - the suggested dates are as above: Wednesday 2nd of Nov to >> Thursday 10th of Nov >> >> - due to the logistical mess of the bathroom situation at Jacob and Lauras >> place they will not be able to host people sleeping over in the same >> manner as they usually do (Anto - there is a bed for you as promised >> though). So we need to recommend some cheap places for booking >> accommodations as well (Chalmers Studenthem) >> >> - as for topics, it seems Maciej suggests a 1.7 release before the sprint so >> we need to check the todo list >> >> Cheers >> >> Bea >> > > Hi > > Maybe with logistics being harder we should move the sprint to some > cheaper place? (I do appreciate OE lending us a room though) > > Cheers, > fijal > For one I can host a sprint at my mountain hut. It has internet and cheap to very cheap accommodation, it's however definitely less conveniently placed (about 2h by train from Prague). Cheers, fijal From binarycrusader at gmail.com Wed Oct 12 00:47:07 2011 From: binarycrusader at gmail.com (Shawn Walker) Date: Tue, 11 Oct 2011 15:47:07 -0700 Subject: [pypy-dev] pypy translation error Message-ID: Greetings, I've been working on getting pypy to build on Solaris 11 Express using gcc 4.5. 
I could use some help deciphering the following error I encountered
during translation using "python2.6 translate.py -Ojit":

[translation:ERROR] Error:
[translation:ERROR]  Traceback (most recent call last):
[translation:ERROR]    File "translate.py", line 308, in main
[translation:ERROR]     drv.proceed(goals)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 809, in proceed
[translation:ERROR]     return self._execute(goals, task_skip = self._maybe_skip())
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/translator/tool/taskengine.py", line 116, in _execute
[translation:ERROR]     res = self._do(goal, taskcallable, *args, **kwds)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 286, in _do
[translation:ERROR]     res = func()
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 323, in task_annotate
[translation:ERROR]     s = annotator.build_types(self.entry_point, self.inputtypes)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line 103, in build_types
[translation:ERROR]     return self.build_graph_types(flowgraph, inputcells, complete_now=complete_now)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line 194, in build_graph_types
[translation:ERROR]     self.complete()
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line 250, in complete
[translation:ERROR]     self.processblock(graph, block)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line 448, in processblock
[translation:ERROR]     self.flowin(graph, block)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line 508, in flowin
[translation:ERROR]     self.consider_op(block.operations[i])
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line 710, in consider_op
[translation:ERROR]     raise_nicer_exception(op, str(graph))
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line 707, in consider_op
[translation:ERROR]     resultcell = consider_meth(*argcells)
[translation:ERROR]    File "<4151-codegen /export/home/swalker/devel/pypy/pypy/annotation/annrpython.py:745>", line 3, in consider_op_simple_call
[translation:ERROR]     return arg.simple_call(*args)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/unaryop.py", line 175, in simple_call
[translation:ERROR]     return obj.call(getbookkeeper().build_args("simple_call", args_s))
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/unaryop.py", line 696, in call
[translation:ERROR]     return bookkeeper.pbc_call(pbc, args)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/bookkeeper.py", line 667, in pbc_call
[translation:ERROR]     results.append(desc.pycall(schedule, args, s_previous_result, op))
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line 798, in pycall
[translation:ERROR]     return self.funcdesc.pycall(schedule, args, s_previous_result, op)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line 283, in pycall
[translation:ERROR]     result = self.specialize(inputcells, op)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line 279, in specialize
[translation:ERROR]     return self.specializer(self, inputcells)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/specialize.py", line 80, in default_specialize
[translation:ERROR]     graph = funcdesc.cachedgraph(key, builder=builder)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line 237, in cachedgraph
[translation:ERROR]     graph = self.buildgraph(alt_name, builder)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line 200, in buildgraph
[translation:ERROR]     graph = translator.buildflowgraph(self.pyobj)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/translator/translator.py", line 77, in buildflowgraph
[translation:ERROR]     graph = space.build_flow(func)
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/objspace/flow/objspace.py", line 279, in build_flow
[translation:ERROR]     ec.build_flow()
[translation:ERROR]    File "/export/home/swalker/devel/pypy/pypy/objspace/flow/flowcontext.py", line 273, in build_flow
[translation:ERROR]     self.space.unwrap(e.get_w_value(self.space))))
[translation:ERROR]  Exception': found an operation that always raises AttributeError: generated by a constant operation:  getattr
[translation:ERROR]      .. v119 = simple_call(v118, fd_0)
[translation:ERROR]      .. '(pypy.rlib.rsocket:269)PacketAddress.as_object'
[translation:ERROR] Processing block:
[translation:ERROR]  block at 3 is a <class 'pypy.objspace.flow.flowcontext.SpamBlock'>
[translation:ERROR]  in (pypy.rlib.rsocket:269)PacketAddress.as_object
[translation:ERROR]  containing the following operations:
[translation:ERROR]        v120 = getattr(space_0, ('newtuple'))
[translation:ERROR]        v121 = getattr(space_0, ('wrap'))
[translation:ERROR]        v118 = getattr(self_0, ('get_ifname'))
[translation:ERROR]        v119 = simple_call(v118, fd_0)
[translation:ERROR]        v122 = simple_call(v121, v119)
[translation:ERROR]        v123 = getattr(space_0, ('wrap'))
[translation:ERROR]        v124 = getattr(self_0, ('get_protocol'))
[translation:ERROR]        v125 = simple_call(v124)
[translation:ERROR]        v126 = simple_call(v123, v125)
[translation:ERROR]        v127 = getattr(space_0, ('wrap'))
[translation:ERROR]        v128 = getattr(self_0, ('get_pkttype'))
[translation:ERROR]        v129 = simple_call(v128)
[translation:ERROR]        v130 = simple_call(v127, v129)
[translation:ERROR]        v131 = getattr(space_0, ('wrap'))
[translation:ERROR]        v132 = getattr(self_0, ('get_hatype'))
[translation:ERROR]        v133 = simple_call(v132)
[translation:ERROR]        v134 = simple_call(v131, v133)
[translation:ERROR]        v135 = getattr(space_0,
('wrap')) [translation:ERROR] v136 = getattr(self_0, ('get_addr')) [translation:ERROR] v137 = simple_call(v136) [translation:ERROR] v138 = simple_call(v135, v137) [translation:ERROR] v139 = newlist(v122, v126, v130, v134, v138) [translation:ERROR] v140 = simple_call(v120, v139) [translation:ERROR] --end-- If I understand the above error correctly, it's either saying get_ifname doesn't exist as an attribute (function) or that as_object doesn't exist. I'm not familiar with pypy's translation errors, so I'm uncertain how to interpret the above. Any assistance is appreciated, -- Shawn Walker From fijall at gmail.com Wed Oct 12 00:56:36 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 12 Oct 2011 00:56:36 +0200 Subject: [pypy-dev] pypy translation error In-Reply-To: References: Message-ID: On Wed, Oct 12, 2011 at 12:47 AM, Shawn Walker wrote: > Greetings, > > I've been working on getting pypy to build on Solaris 11 Express using gcc 4.5. Hi. In general Solaris is not a supported platform, so expect problems (we don't happen to have a buildbot or a maintainer), we can try to help however. > > I could use some help deciphering the following error I encountered > during translation using "python2.6 translate.py -Ojit": > > [translation:ERROR] Error: > [translation:ERROR] ?Traceback (most recent call last): > [translation:ERROR] ? ?File "translate.py", line 308, in main > [translation:ERROR] ? ? drv.proceed(goals) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 809, > in proceed > [translation:ERROR] ? ? return self._execute(goals, task_skip = > self._maybe_skip()) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/translator/tool/taskengine.py", > line 116, in _execute > [translation:ERROR] ? ? res = self._do(goal, taskcallable, *args, **kwds) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 286, > in _do > [translation:ERROR] ? ? 
res = func() > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 323, > in task_annotate > [translation:ERROR] ? ? s = annotator.build_types(self.entry_point, > self.inputtypes) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line > 103, in build_types > [translation:ERROR] ? ? return self.build_graph_types(flowgraph, > inputcells, complete_now=complete_now) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line > 194, in build_graph_types > [translation:ERROR] ? ? self.complete() > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line > 250, in complete > [translation:ERROR] ? ? self.processblock(graph, block) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line > 448, in processblock > [translation:ERROR] ? ? self.flowin(graph, block) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line > 508, in flowin > [translation:ERROR] ? ? self.consider_op(block.operations[i]) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line > 710, in consider_op > [translation:ERROR] ? ? raise_nicer_exception(op, str(graph)) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line > 707, in consider_op > [translation:ERROR] ? ? resultcell = consider_meth(*argcells) > [translation:ERROR] ? ?File "<4151-codegen > /export/home/swalker/devel/pypy/pypy/annotation/annrpython.py:745>", > line 3, in consider_op_simple_call > [translation:ERROR] ? ? return arg.simple_call(*args) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/unaryop.py", line > 175, in simple_call > [translation:ERROR] ? ? return > obj.call(getbookkeeper().build_args("simple_call", args_s)) > [translation:ERROR] ? 
?File > "/export/home/swalker/devel/pypy/pypy/annotation/unaryop.py", line > 696, in call > [translation:ERROR] ? ? return bookkeeper.pbc_call(pbc, args) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/bookkeeper.py", line > 667, in pbc_call > [translation:ERROR] ? ? results.append(desc.pycall(schedule, args, > s_previous_result, op)) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line > 798, in pycall > [translation:ERROR] ? ? return self.funcdesc.pycall(schedule, args, > s_previous_result, op) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line > 283, in pycall > [translation:ERROR] ? ? result = self.specialize(inputcells, op) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line > 279, in specialize > [translation:ERROR] ? ? return self.specializer(self, inputcells) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/specialize.py", line > 80, in default_specialize > [translation:ERROR] ? ? graph = funcdesc.cachedgraph(key, builder=builder) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line > 237, in cachedgraph > [translation:ERROR] ? ? graph = self.buildgraph(alt_name, builder) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line > 200, in buildgraph > [translation:ERROR] ? ? graph = translator.buildflowgraph(self.pyobj) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/translator/translator.py", line > 77, in buildflowgraph > [translation:ERROR] ? ? graph = space.build_flow(func) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/objspace/flow/objspace.py", line > 279, in build_flow > [translation:ERROR] ? ? ec.build_flow() > [translation:ERROR] ? 
?File > "/export/home/swalker/devel/pypy/pypy/objspace/flow/flowcontext.py", > line 273, in build_flow > [translation:ERROR] ? ? self.space.unwrap(e.get_w_value(self.space)))) > [translation:ERROR] ?Exception': found an operation that always raises > AttributeError: generated by a constant operation: ?getattr > [translation:ERROR] ? ? ?.. v119 = simple_call(v118, fd_0) > [translation:ERROR] ? ? ?.. '(pypy.rlib.rsocket:269)PacketAddress.as_object' > [translation:ERROR] Processing block: > [translation:ERROR] ?block at 3 is a 'pypy.objspace.flow.flowcontext.SpamBlock'> > [translation:ERROR] ?in (pypy.rlib.rsocket:269)PacketAddress.as_object > [translation:ERROR] ?containing the following operations: > [translation:ERROR] ? ? ? ?v120 = getattr(space_0, ('newtuple')) > [translation:ERROR] ? ? ? ?v121 = getattr(space_0, ('wrap')) > [translation:ERROR] ? ? ? ?v118 = getattr(self_0, ('get_ifname')) > [translation:ERROR] ? ? ? ?v119 = simple_call(v118, fd_0) > [translation:ERROR] ? ? ? ?v122 = simple_call(v121, v119) > [translation:ERROR] ? ? ? ?v123 = getattr(space_0, ('wrap')) > [translation:ERROR] ? ? ? ?v124 = getattr(self_0, ('get_protocol')) > [translation:ERROR] ? ? ? ?v125 = simple_call(v124) > [translation:ERROR] ? ? ? ?v126 = simple_call(v123, v125) > [translation:ERROR] ? ? ? ?v127 = getattr(space_0, ('wrap')) > [translation:ERROR] ? ? ? ?v128 = getattr(self_0, ('get_pkttype')) > [translation:ERROR] ? ? ? ?v129 = simple_call(v128) > [translation:ERROR] ? ? ? ?v130 = simple_call(v127, v129) > [translation:ERROR] ? ? ? ?v131 = getattr(space_0, ('wrap')) > [translation:ERROR] ? ? ? ?v132 = getattr(self_0, ('get_hatype')) > [translation:ERROR] ? ? ? ?v133 = simple_call(v132) > [translation:ERROR] ? ? ? ?v134 = simple_call(v131, v133) > [translation:ERROR] ? ? ? ?v135 = getattr(space_0, ('wrap')) > [translation:ERROR] ? ? ? ?v136 = getattr(self_0, ('get_addr')) > [translation:ERROR] ? ? ? ?v137 = simple_call(v136) > [translation:ERROR] ? ? ? 
?v138 = simple_call(v135, v137) > [translation:ERROR] ? ? ? ?v139 = newlist(v122, v126, v130, v134, v138) > [translation:ERROR] ? ? ? ?v140 = simple_call(v120, v139) > [translation:ERROR] ?--end-- > > If I understand the above error correctly, it's either saying > get_ifname doesn't exist as an attribute (function) or that as_object > doesn't exist. > > I'm not familiar with pypy's translation errors, so I'm uncertain how > to interpret the above. > > Any assistance is appreciated, > -- > Shawn Walker Seems as_object does not exist. I suggest first running rsocket tests and seeing why it's not there by reading the source code (you don't have to translate for that). Cheers, fijal From binarycrusader at gmail.com Wed Oct 12 01:39:17 2011 From: binarycrusader at gmail.com (Shawn Walker) Date: Tue, 11 Oct 2011 16:39:17 -0700 Subject: [pypy-dev] pypy translation error In-Reply-To: References: Message-ID: On 11 October 2011 15:56, Maciej Fijalkowski wrote: > On Wed, Oct 12, 2011 at 12:47 AM, Shawn Walker wrote: >> Greetings, >> >> I've been working on getting pypy to build on Solaris 11 Express using gcc 4.5. > > Hi. In general Solaris is not a supported platform, so expect problems > (we don't happen to have a buildbot or a maintainer), we can try to > help however. Yes, that was apparent, but the help is appreciated. With some small tweaks most of pypy is compiling, so I think it shouldn't take much. I've figured out a lot of other issues already. >> >> I could use some help deciphering the following error I encountered >> during translation using "python2.6 translate.py -Ojit": >> >> [translation:ERROR] Error: >> [translation:ERROR] ?Traceback (most recent call last): >> [translation:ERROR] ? ?File "translate.py", line 308, in main >> [translation:ERROR] ? ? drv.proceed(goals) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 809, >> in proceed >> [translation:ERROR] ? ? 
return self._execute(goals, task_skip = >> self._maybe_skip()) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/translator/tool/taskengine.py", >> line 116, in _execute >> [translation:ERROR] ? ? res = self._do(goal, taskcallable, *args, **kwds) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 286, >> in _do >> [translation:ERROR] ? ? res = func() >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 323, >> in task_annotate >> [translation:ERROR] ? ? s = annotator.build_types(self.entry_point, >> self.inputtypes) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >> 103, in build_types >> [translation:ERROR] ? ? return self.build_graph_types(flowgraph, >> inputcells, complete_now=complete_now) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >> 194, in build_graph_types >> [translation:ERROR] ? ? self.complete() >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >> 250, in complete >> [translation:ERROR] ? ? self.processblock(graph, block) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >> 448, in processblock >> [translation:ERROR] ? ? self.flowin(graph, block) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >> 508, in flowin >> [translation:ERROR] ? ? self.consider_op(block.operations[i]) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >> 710, in consider_op >> [translation:ERROR] ? ? raise_nicer_exception(op, str(graph)) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >> 707, in consider_op >> [translation:ERROR] ? ? 
resultcell = consider_meth(*argcells) >> [translation:ERROR] ? ?File "<4151-codegen >> /export/home/swalker/devel/pypy/pypy/annotation/annrpython.py:745>", >> line 3, in consider_op_simple_call >> [translation:ERROR] ? ? return arg.simple_call(*args) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/unaryop.py", line >> 175, in simple_call >> [translation:ERROR] ? ? return >> obj.call(getbookkeeper().build_args("simple_call", args_s)) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/unaryop.py", line >> 696, in call >> [translation:ERROR] ? ? return bookkeeper.pbc_call(pbc, args) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/bookkeeper.py", line >> 667, in pbc_call >> [translation:ERROR] ? ? results.append(desc.pycall(schedule, args, >> s_previous_result, op)) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line >> 798, in pycall >> [translation:ERROR] ? ? return self.funcdesc.pycall(schedule, args, >> s_previous_result, op) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line >> 283, in pycall >> [translation:ERROR] ? ? result = self.specialize(inputcells, op) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line >> 279, in specialize >> [translation:ERROR] ? ? return self.specializer(self, inputcells) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/specialize.py", line >> 80, in default_specialize >> [translation:ERROR] ? ? graph = funcdesc.cachedgraph(key, builder=builder) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line >> 237, in cachedgraph >> [translation:ERROR] ? ? graph = self.buildgraph(alt_name, builder) >> [translation:ERROR] ? 
?File >> "/export/home/swalker/devel/pypy/pypy/annotation/description.py", line >> 200, in buildgraph >> [translation:ERROR] ? ? graph = translator.buildflowgraph(self.pyobj) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/translator/translator.py", line >> 77, in buildflowgraph >> [translation:ERROR] ? ? graph = space.build_flow(func) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/objspace/flow/objspace.py", line >> 279, in build_flow >> [translation:ERROR] ? ? ec.build_flow() >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/objspace/flow/flowcontext.py", >> line 273, in build_flow >> [translation:ERROR] ? ? self.space.unwrap(e.get_w_value(self.space)))) >> [translation:ERROR] ?Exception': found an operation that always raises >> AttributeError: generated by a constant operation: ?getattr >> [translation:ERROR] ? ? ?.. v119 = simple_call(v118, fd_0) >> [translation:ERROR] ? ? ?.. '(pypy.rlib.rsocket:269)PacketAddress.as_object' >> [translation:ERROR] Processing block: >> [translation:ERROR] ?block at 3 is a > 'pypy.objspace.flow.flowcontext.SpamBlock'> >> [translation:ERROR] ?in (pypy.rlib.rsocket:269)PacketAddress.as_object >> [translation:ERROR] ?containing the following operations: >> [translation:ERROR] ? ? ? ?v120 = getattr(space_0, ('newtuple')) >> [translation:ERROR] ? ? ? ?v121 = getattr(space_0, ('wrap')) >> [translation:ERROR] ? ? ? ?v118 = getattr(self_0, ('get_ifname')) >> [translation:ERROR] ? ? ? ?v119 = simple_call(v118, fd_0) >> [translation:ERROR] ? ? ? ?v122 = simple_call(v121, v119) >> [translation:ERROR] ? ? ? ?v123 = getattr(space_0, ('wrap')) >> [translation:ERROR] ? ? ? ?v124 = getattr(self_0, ('get_protocol')) >> [translation:ERROR] ? ? ? ?v125 = simple_call(v124) >> [translation:ERROR] ? ? ? ?v126 = simple_call(v123, v125) >> [translation:ERROR] ? ? ? ?v127 = getattr(space_0, ('wrap')) >> [translation:ERROR] ? ? ? 
?v128 = getattr(self_0, ('get_pkttype')) >> [translation:ERROR] ? ? ? ?v129 = simple_call(v128) >> [translation:ERROR] ? ? ? ?v130 = simple_call(v127, v129) >> [translation:ERROR] ? ? ? ?v131 = getattr(space_0, ('wrap')) >> [translation:ERROR] ? ? ? ?v132 = getattr(self_0, ('get_hatype')) >> [translation:ERROR] ? ? ? ?v133 = simple_call(v132) >> [translation:ERROR] ? ? ? ?v134 = simple_call(v131, v133) >> [translation:ERROR] ? ? ? ?v135 = getattr(space_0, ('wrap')) >> [translation:ERROR] ? ? ? ?v136 = getattr(self_0, ('get_addr')) >> [translation:ERROR] ? ? ? ?v137 = simple_call(v136) >> [translation:ERROR] ? ? ? ?v138 = simple_call(v135, v137) >> [translation:ERROR] ? ? ? ?v139 = newlist(v122, v126, v130, v134, v138) >> [translation:ERROR] ? ? ? ?v140 = simple_call(v120, v139) >> [translation:ERROR] ?--end-- >> >> If I understand the above error correctly, it's either saying >> get_ifname doesn't exist as an attribute (function) or that as_object >> doesn't exist. >> >> I'm not familiar with pypy's translation errors, so I'm uncertain how >> to interpret the above. >> >> Any assistance is appreciated, >> -- >> Shawn Walker > > Seems as_object does not exist. I suggest first running rsocket tests > and seeing why it's not there by reading the source code (you don't > have to translate for that). So I cd'd into 'pypy' from the source root, and ran: ./test_all.py rlib/test/test_rsocket.py Everything passes except one test which was skipped: rlib/test/test_rsocket.py:35: test_netlink_addr SKIPPED Looking at the source: 35 def test_netlink_addr(): 36 if getattr(rsocket, 'AF_NETLINK', None) is None: 37 py.test.skip('AF_NETLINK not supported.') That seems expected given my platform. Is there a way to do an incremental translate? Currently, every time I run translate, it does the whole thing again, which makes the whole fix/build cycle very slow. 
-Shawn

From fijall at gmail.com  Wed Oct 12 01:51:45 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Wed, 12 Oct 2011 01:51:45 +0200
Subject: [pypy-dev] pypy translation error
In-Reply-To:
References:
Message-ID:

On Wed, Oct 12, 2011 at 1:39 AM, Shawn Walker wrote:
> On 11 October 2011 15:56, Maciej Fijalkowski wrote:
>> On Wed, Oct 12, 2011 at 12:47 AM, Shawn Walker wrote:
>>> Greetings,
>>>
>>> I've been working on getting pypy to build on Solaris 11 Express using gcc 4.5.
>>
>> Hi. In general Solaris is not a supported platform, so expect problems
>> (we don't happen to have a buildbot or a maintainer), we can try to
>> help however.
>
> Yes, that was apparent, but the help is appreciated.
>
> With some small tweaks most of pypy is compiling, so I think it
> shouldn't take much.
>
> I've figured out a lot of other issues already.
>
>>> I could use some help deciphering the following error I encountered
>>> during translation using "python2.6 translate.py -Ojit":
>>>
>>> [... quoted traceback snipped; identical to the one shown in full above ...]
>>>
>>> If I understand the above error correctly, it's either saying
>>> get_ifname doesn't exist as an attribute (function) or that as_object
>>> doesn't exist.
>>>
>>> I'm not familiar with pypy's translation errors, so I'm uncertain how
>>> to interpret the above.
>>>
>>> Any assistance is appreciated,
>>> --
>>> Shawn Walker
>>
>> Seems as_object does not exist. I suggest first running rsocket tests
>> and seeing why it's not there by reading the source code (you don't
>> have to translate for that).
>
> So I cd'd into 'pypy' from the source root, and ran:
>
> ./test_all.py rlib/test/test_rsocket.py
>
> Everything passes except one test which was skipped:
>
> rlib/test/test_rsocket.py:35: test_netlink_addr SKIPPED
>
> Looking at the source:
>
>  35 def test_netlink_addr():
>  36     if getattr(rsocket, 'AF_NETLINK', None) is None:
>  37         py.test.skip('AF_NETLINK not supported.')
>
> That seems expected given my platform.
>
> Is there a way to do an incremental translate?  Currently, every time
> I run translate, it does the whole thing again, which makes the whole
> fix/build cycle very slow.
>
> -Shawn

No, there is no such thing :(

I believe one of those operations inside get_ifname does not work,
like ifreq is not there? (can you import it from _rsocket_rffi?).
For now you can possibly disable the entire PacketAddress class declaration and it should compile From fijall at gmail.com Wed Oct 12 02:52:24 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 12 Oct 2011 02:52:24 +0200 Subject: [pypy-dev] pypy translation error In-Reply-To: References: Message-ID: On Wed, Oct 12, 2011 at 2:47 AM, Shawn Walker wrote: > On 11 October 2011 16:51, Maciej Fijalkowski wrote: >> On Wed, Oct 12, 2011 at 1:39 AM, Shawn Walker wrote: > ... >>> Is there a way to do an incremental translate? ?Currently, every time >>> I run translate, it does the whole thing again, which makes the whole >>> fix/build cycle very slow. >>> >>> -Shawn >>> >> >> No, there is no such thing :( >> >> I believe one of those operations inside get_ifname does not work, >> like ifreq is not there? (can you import it from _rsocket_rffi?). For >> now you can possibly disable the entire PacketAddress class >> declaration and it should compile > > ifreq is definitely defined, but from what I'm reading, this is a > common issue with things that make assumptions about socket structures > if AF_PACKET is defined. > > Regardless, I simply conditionally disabled the entire PacketAddress > class for now. > > However, that leads me to: > > [translation:ERROR] Error: > [translation:ERROR] ?Traceback (most recent call last): > [translation:ERROR] ? ?File "translate.py", line 308, in main > [translation:ERROR] ? ? drv.proceed(goals) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 809, > in proceed > [translation:ERROR] ? ? return self._execute(goals, task_skip = > self._maybe_skip()) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/translator/tool/taskengine.py", > line 116, in _execute > [translation:ERROR] ? ? res = self._do(goal, taskcallable, *args, **kwds) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 286, > in _do > [translation:ERROR] ? ? 
res = func() > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 323, > in task_annotate > [translation:ERROR] ? ? s = annotator.build_types(self.entry_point, > self.inputtypes) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line > 103, in build_types > [translation:ERROR] ? ? return self.build_graph_types(flowgraph, > inputcells, complete_now=complete_now) > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line > 194, in build_graph_types > [translation:ERROR] ? ? self.complete() > [translation:ERROR] ? ?File > "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line > 272, in complete > [translation:ERROR] ? ? raise AnnotatorError(text) > [translation:ERROR] ?AnnotatorError: > [translation:ERROR] -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > [translation:ERROR] Blocked block -- operation cannot succeed > [translation:ERROR] ?v119 = getattr(v118, ('set_interrupt')) > [translation:ERROR] In (pypy.module.cpyext.pyerrors:311)PyErr_SetInterrupt at 0x10475cac>: > [translation:ERROR] Happened at file > /export/home/swalker/devel/pypy/pypy/module/cpyext/pyerrors.py line > 316 > [translation:ERROR] > [translation:ERROR] ==> ? ? space.check_signal_action.set_interrupt() > [translation:ERROR] > [translation:ERROR] Known variable annotations: > [translation:ERROR] ?v118 = SomePBC(can_be_None=True, const=None, > subset_of=None) > > "Blocked block" ... errr? > > I'm guessing the signal handling isn't working as pypy expects somehow. > > Thoughts? Yes, apparently. You can run some tests, but also try to compile without the cpyext module. ./translate.py -Ojit targetpypystandalone.py --withoutmod-cpyext. I wonder though why check_signal_action is None. Try running tests in pypy/modules/signal? 
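The workaround Maciej suggests above — disabling a platform-specific class declaration when the platform lacks the feature — is the same pattern as the AF_NETLINK probe quoted earlier in the thread. A minimal sketch of that pattern (illustrative names only; this is not PyPy's actual rsocket code, which lives in pypy/rlib/rsocket.py):

```python
import socket

# Probe for the feature at import time instead of assuming it exists.
# AF_PACKET is Linux-only; on Solaris/OS X the attribute is absent.
HAS_AF_PACKET = getattr(socket, "AF_PACKET", None) is not None

if HAS_AF_PACKET:
    # The class is only declared on platforms where the constant exists,
    # so nothing below it ever touches a missing struct or constant.
    class PacketAddress(object):
        """Hypothetical stand-in for the class discussed in the thread."""
        family = socket.AF_PACKET

def make_address(family):
    # Callers check the flag rather than assuming the class is defined.
    if HAS_AF_PACKET and family == socket.AF_PACKET:
        return PacketAddress()
    raise ValueError("unsupported address family: %r" % (family,))
```

This keeps the rest of the module importable on platforms without AF_PACKET, which is all the "conditionally disabled the entire PacketAddress class" fix in this thread amounts to.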
From binarycrusader at gmail.com Wed Oct 12 02:47:44 2011 From: binarycrusader at gmail.com (Shawn Walker) Date: Tue, 11 Oct 2011 17:47:44 -0700 Subject: [pypy-dev] pypy translation error In-Reply-To: References: Message-ID: On 11 October 2011 16:51, Maciej Fijalkowski wrote: > On Wed, Oct 12, 2011 at 1:39 AM, Shawn Walker wrote: ... >> Is there a way to do an incremental translate? ?Currently, every time >> I run translate, it does the whole thing again, which makes the whole >> fix/build cycle very slow. >> >> -Shawn >> > > No, there is no such thing :( > > I believe one of those operations inside get_ifname does not work, > like ifreq is not there? (can you import it from _rsocket_rffi?). For > now you can possibly disable the entire PacketAddress class > declaration and it should compile ifreq is definitely defined, but from what I'm reading, this is a common issue with things that make assumptions about socket structures if AF_PACKET is defined. Regardless, I simply conditionally disabled the entire PacketAddress class for now. 
However, that leads me to: [translation:ERROR] Error: [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "translate.py", line 308, in main [translation:ERROR] drv.proceed(goals) [translation:ERROR] File "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 809, in proceed [translation:ERROR] return self._execute(goals, task_skip = self._maybe_skip()) [translation:ERROR] File "/export/home/swalker/devel/pypy/pypy/translator/tool/taskengine.py", line 116, in _execute [translation:ERROR] res = self._do(goal, taskcallable, *args, **kwds) [translation:ERROR] File "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 286, in _do [translation:ERROR] res = func() [translation:ERROR] File "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 323, in task_annotate [translation:ERROR] s = annotator.build_types(self.entry_point, self.inputtypes) [translation:ERROR] File "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line 103, in build_types [translation:ERROR] return self.build_graph_types(flowgraph, inputcells, complete_now=complete_now) [translation:ERROR] File "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line 194, in build_graph_types [translation:ERROR] self.complete() [translation:ERROR] File "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line 272, in complete [translation:ERROR] raise AnnotatorError(text) [translation:ERROR] AnnotatorError: [translation:ERROR] -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ [translation:ERROR] Blocked block -- operation cannot succeed [translation:ERROR] v119 = getattr(v118, ('set_interrupt')) [translation:ERROR] In : [translation:ERROR] Happened at file /export/home/swalker/devel/pypy/pypy/module/cpyext/pyerrors.py line 316 [translation:ERROR] [translation:ERROR] ==> space.check_signal_action.set_interrupt() [translation:ERROR] [translation:ERROR] Known variable annotations: [translation:ERROR] v118 = 
SomePBC(can_be_None=True, const=None, subset_of=None) "Blocked block" ... errr? I'm guessing the signal handling isn't working as pypy expects somehow. Thoughts? -- Shawn Walker From binarycrusader at gmail.com Wed Oct 12 06:44:58 2011 From: binarycrusader at gmail.com (Shawn Walker) Date: Tue, 11 Oct 2011 21:44:58 -0700 Subject: [pypy-dev] pypy translation error In-Reply-To: References: Message-ID: On 11 October 2011 17:52, Maciej Fijalkowski wrote: > On Wed, Oct 12, 2011 at 2:47 AM, Shawn Walker wrote: >> On 11 October 2011 16:51, Maciej Fijalkowski wrote: >>> On Wed, Oct 12, 2011 at 1:39 AM, Shawn Walker wrote: >> ... >>>> Is there a way to do an incremental translate? ?Currently, every time >>>> I run translate, it does the whole thing again, which makes the whole >>>> fix/build cycle very slow. >>>> >>>> -Shawn >>>> >>> >>> No, there is no such thing :( >>> >>> I believe one of those operations inside get_ifname does not work, >>> like ifreq is not there? (can you import it from _rsocket_rffi?). For >>> now you can possibly disable the entire PacketAddress class >>> declaration and it should compile >> >> ifreq is definitely defined, but from what I'm reading, this is a >> common issue with things that make assumptions about socket structures >> if AF_PACKET is defined. >> >> Regardless, I simply conditionally disabled the entire PacketAddress >> class for now. >> >> However, that leads me to: >> >> [translation:ERROR] Error: >> [translation:ERROR] ?Traceback (most recent call last): >> [translation:ERROR] ? ?File "translate.py", line 308, in main >> [translation:ERROR] ? ? drv.proceed(goals) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 809, >> in proceed >> [translation:ERROR] ? ? return self._execute(goals, task_skip = >> self._maybe_skip()) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/translator/tool/taskengine.py", >> line 116, in _execute >> [translation:ERROR] ? ? 
res = self._do(goal, taskcallable, *args, **kwds) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 286, >> in _do >> [translation:ERROR] ? ? res = func() >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 323, >> in task_annotate >> [translation:ERROR] ? ? s = annotator.build_types(self.entry_point, >> self.inputtypes) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >> 103, in build_types >> [translation:ERROR] ? ? return self.build_graph_types(flowgraph, >> inputcells, complete_now=complete_now) >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >> 194, in build_graph_types >> [translation:ERROR] ? ? self.complete() >> [translation:ERROR] ? ?File >> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >> 272, in complete >> [translation:ERROR] ? ? raise AnnotatorError(text) >> [translation:ERROR] ?AnnotatorError: >> [translation:ERROR] -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ >> [translation:ERROR] Blocked block -- operation cannot succeed >> [translation:ERROR] ?v119 = getattr(v118, ('set_interrupt')) >> [translation:ERROR] In > (pypy.module.cpyext.pyerrors:311)PyErr_SetInterrupt at 0x10475cac>: >> [translation:ERROR] Happened at file >> /export/home/swalker/devel/pypy/pypy/module/cpyext/pyerrors.py line >> 316 >> [translation:ERROR] >> [translation:ERROR] ==> ? ? space.check_signal_action.set_interrupt() >> [translation:ERROR] >> [translation:ERROR] Known variable annotations: >> [translation:ERROR] ?v118 = SomePBC(can_be_None=True, const=None, >> subset_of=None) >> >> "Blocked block" ... errr? >> >> I'm guessing the signal handling isn't working as pypy expects somehow. >> >> Thoughts? > > Yes, apparently. You can run some tests, but also try to compile > without the cpyext module. 
./translate.py -Ojit > targetpypystandalone.py --withoutmod-cpyext. I wonder though why > check_signal_action is None. Try running tests in pypy/modules/signal? All of the tests in pypy/modules/signal pass. This time the translation succeeded by using the without-cpymodext option. However, compilation failed because the 'makedev', 'major', and 'minor' symbols are undefined. But I think I can figure that one out myself. -- Shawn Walker From fijall at gmail.com Wed Oct 12 09:57:59 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 12 Oct 2011 09:57:59 +0200 Subject: [pypy-dev] pypy translation error In-Reply-To: References: Message-ID: On Wed, Oct 12, 2011 at 6:44 AM, Shawn Walker wrote: > On 11 October 2011 17:52, Maciej Fijalkowski wrote: >> On Wed, Oct 12, 2011 at 2:47 AM, Shawn Walker wrote: >>> On 11 October 2011 16:51, Maciej Fijalkowski wrote: >>>> On Wed, Oct 12, 2011 at 1:39 AM, Shawn Walker wrote: >>> ... >>>>> Is there a way to do an incremental translate? ?Currently, every time >>>>> I run translate, it does the whole thing again, which makes the whole >>>>> fix/build cycle very slow. >>>>> >>>>> -Shawn >>>>> >>>> >>>> No, there is no such thing :( >>>> >>>> I believe one of those operations inside get_ifname does not work, >>>> like ifreq is not there? (can you import it from _rsocket_rffi?). For >>>> now you can possibly disable the entire PacketAddress class >>>> declaration and it should compile >>> >>> ifreq is definitely defined, but from what I'm reading, this is a >>> common issue with things that make assumptions about socket structures >>> if AF_PACKET is defined. >>> >>> Regardless, I simply conditionally disabled the entire PacketAddress >>> class for now. >>> >>> However, that leads me to: >>> >>> [translation:ERROR] Error: >>> [translation:ERROR] ?Traceback (most recent call last): >>> [translation:ERROR] ? ?File "translate.py", line 308, in main >>> [translation:ERROR] ? ? drv.proceed(goals) >>> [translation:ERROR] ? 
?File >>> "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 809, >>> in proceed >>> [translation:ERROR] ? ? return self._execute(goals, task_skip = >>> self._maybe_skip()) >>> [translation:ERROR] ? ?File >>> "/export/home/swalker/devel/pypy/pypy/translator/tool/taskengine.py", >>> line 116, in _execute >>> [translation:ERROR] ? ? res = self._do(goal, taskcallable, *args, **kwds) >>> [translation:ERROR] ? ?File >>> "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 286, >>> in _do >>> [translation:ERROR] ? ? res = func() >>> [translation:ERROR] ? ?File >>> "/export/home/swalker/devel/pypy/pypy/translator/driver.py", line 323, >>> in task_annotate >>> [translation:ERROR] ? ? s = annotator.build_types(self.entry_point, >>> self.inputtypes) >>> [translation:ERROR] ? ?File >>> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >>> 103, in build_types >>> [translation:ERROR] ? ? return self.build_graph_types(flowgraph, >>> inputcells, complete_now=complete_now) >>> [translation:ERROR] ? ?File >>> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >>> 194, in build_graph_types >>> [translation:ERROR] ? ? self.complete() >>> [translation:ERROR] ? ?File >>> "/export/home/swalker/devel/pypy/pypy/annotation/annrpython.py", line >>> 272, in complete >>> [translation:ERROR] ? ? raise AnnotatorError(text) >>> [translation:ERROR] ?AnnotatorError: >>> [translation:ERROR] -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ >>> [translation:ERROR] Blocked block -- operation cannot succeed >>> [translation:ERROR] ?v119 = getattr(v118, ('set_interrupt')) >>> [translation:ERROR] In >> (pypy.module.cpyext.pyerrors:311)PyErr_SetInterrupt at 0x10475cac>: >>> [translation:ERROR] Happened at file >>> /export/home/swalker/devel/pypy/pypy/module/cpyext/pyerrors.py line >>> 316 >>> [translation:ERROR] >>> [translation:ERROR] ==> ? ? 
space.check_signal_action.set_interrupt() >>> [translation:ERROR] >>> [translation:ERROR] Known variable annotations: >>> [translation:ERROR] ?v118 = SomePBC(can_be_None=True, const=None, >>> subset_of=None) >>> >>> "Blocked block" ... errr? >>> >>> I'm guessing the signal handling isn't working as pypy expects somehow. >>> >>> Thoughts? >> >> Yes, apparently. You can run some tests, but also try to compile >> without the cpyext module. ./translate.py -Ojit >> targetpypystandalone.py --withoutmod-cpyext. I wonder though why >> check_signal_action is None. Try running tests in pypy/modules/signal? > > All of the tests in pypy/modules/signal pass. > > This time the translation succeeded by using the without-cpymodext option. > > However, compilation failed because the 'makedev', 'major', and > 'minor' symbols are undefined. ?But I think I can figure that one out > myself. > > -- > Shawn Walker > Btw Feel free to submit your patches to the bugtracker. Even though solaris is unsupported platform, that should not force the next guy to jump through all the hoops. Cheers, fijal From bea at changemaker.nu Wed Oct 12 10:18:22 2011 From: bea at changemaker.nu (Bea During) Date: Wed, 12 Oct 2011 10:18:22 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> Message-ID: <4E954D4E.7020008@changemaker.nu> Hi there Maciej Fijalkowski skrev 2011-10-11 21:41: > On Tue, Oct 11, 2011 at 9:14 PM, Bea During wrote: >> Hi there >> >> Antonio Cuni skrev 2011-10-11 17:01: >>> On 13/09/11 13:35, Carl Friedrich Bolz wrote: >>> >>>> Some of us need to be in Stockholm Oct 24 and 28. >>>> Anto needs to be with his family Nov 1. >>>> Fscons starts Friday Nov 11 in G?teborg, and we're giving a talk. >>>> >>>> Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd >>>> through >>>> Thursday Nov 10. 
>>>> >>>> fscons: http://fscons.org/ Nov 11-13 Not sure what day we will be >>>> speaking >>>> yet. >>>> >>>> What do the rest of you think of this idea? >>> >>> Did we decide anything wrt the Gothenburg sprint in November? I think >>> it's time booking the flights, else they might become expensive. >>> >>> We should decide the dates and publish the sprint announcement. >>> >>> Probably, I'll be able to come only on the 4th, and then I might stay >>> until the 13th, to attend fscons. >>> >>> ciao, >>> Anto >> I talked to Laura about this today and she wanted me to email the following >> and start to push for a sprint announcement (which apparently is not needed >> since the discussion has been rebooted): >> >> - the sprint needs to take place at the Open End office due to bathroom >> logistic angst in Lauras/Jacobs house (apparently all apartments need to >> revamp some of the plumbing, which collides with the sprint) >> So we need to add directions to Open End facilities to the sprint >> announcement: Open End AB, Norra Ågatan 10, 416 64 Göteborg, see >> http://kartor.eniro.se/m/ajEHO >> >> - suggested dates are as above: Wednesday 2nd of Nov to >> Thursday 10th of Nov >> >> - due to the logistical mess of the bathroom situation at Jacob and Lauras >> place they will not be able to host people sleeping over in the same >> manner as they usually do (Anto - there is a bed for you as promised >> though). So we need to recommend some cheap places for booking >> accommodations as well (Chalmers Studenthem) >> >> - as for topics, it seems Maciej suggests a 1.7 release before the sprint so >> we need to check the todo list >> >> Cheers >> >> Bea >> > Hi > > Maybe with logistics being harder we should move the sprint to some > cheaper place? (I do appreciate OE lending us a room though) > > Cheers, > fijal With cheaper place do you mean another country/town than Gothenburg?
If so I would like to remind about FSCons as well as the possibility of doing a potential presentation for Vinnova in end of October. What did you have in mind? Cheers Bea From anto.cuni at gmail.com Wed Oct 12 12:04:43 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 12 Oct 2011 12:04:43 +0200 Subject: [pypy-dev] [pypy-commit] pypy py3k: Add a modifiable copy of opcode.py In-Reply-To: <20111011212716.C42DD82112@wyvern.cs.uni-duesseldorf.de> References: <20111011212716.C42DD82112@wyvern.cs.uni-duesseldorf.de> Message-ID: <4E95663B.8010707@gmail.com> On 11/10/11 23:27, amauryfa wrote: > Author: Amaury Forgeot d'Arc > Branch: py3k > Changeset: r47947:838f7aaf8802 > Date: 2011-10-11 23:12 +0200 > http://bitbucket.org/pypy/pypy/changeset/838f7aaf8802/ > > Log: Add a modifiable copy of opcode.py > > diff --git a/lib-python/3.2/opcode.py b/lib-python/modified-3.2/opcode.py > copy from lib-python/3.2/opcode.py > copy to lib-python/modified-3.2/opcode.py what about killing the * vs modified-* thing while we are at it? With mercurial it should be very easy to get a diff against the original one whenever we want.
ciao, Anto From fijall at gmail.com Wed Oct 12 12:14:12 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 12 Oct 2011 12:14:12 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: <4E954D4E.7020008@changemaker.nu> References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> <4E954D4E.7020008@changemaker.nu> Message-ID: On Wed, Oct 12, 2011 at 10:18 AM, Bea During wrote: > Hi there > > Maciej Fijalkowski skrev 2011-10-11 21:41: >> >> On Tue, Oct 11, 2011 at 9:14 PM, Bea During wrote: >>> >>> Hi there >>> >>> Antonio Cuni skrev 2011-10-11 17:01: >>>> >>>> On 13/09/11 13:35, Carl Friedrich Bolz wrote: >>>> >>>>> Some of us need to be in Stockholm Oct 24 and 28. >>>>> Anto needs to be with his family Nov 1. >>>>> Fscons starts Friday Nov 11 in Göteborg, and we're giving a talk. >>>>> >>>>> Thus I propose that we hold a Sprint at my house Wednesday Nov 2nd >>>>> through >>>>> Thursday Nov 10. >>>>> >>>>> fscons: http://fscons.org/ Nov 11-13 Not sure what day we will be >>>>> speaking >>>>> yet. >>>>> >>>>> What do the rest of you think of this idea? >>>> >>>> Did we decide anything wrt the Gothenburg sprint in November? I think >>>> it's time booking the flights, else they might become expensive. >>>> >>>> We should decide the dates and publish the sprint announcement. >>>> >>>> Probably, I'll be able to come only on the 4th, and then I might stay >>>> until the 13th, to attend fscons.
>>>> >>>> ciao, >>>> Anto >>> >>> I talked to Laura about this today and she wanted me to email the >>> following >>> and start to push for a sprint announcement (which apparently is not needed >>> since the discussion has been rebooted): >>> >>> - the sprint needs to take place at the Open End office due to bathroom >>> logistic angst in Lauras/Jacobs house (apparently all apartments need to >>> revamp some of the plumbing, which collides with the sprint) >>> So we need to add directions to Open End facilities to the sprint >>> announcement: Open End AB, Norra Ågatan 10, 416 64 Göteborg, see >>> http://kartor.eniro.se/m/ajEHO >>> >>> - suggested dates are as above: Wednesday 2nd of Nov to >>> Thursday 10th of Nov >>> >>> - due to the logistical mess of the bathroom situation at Jacob and >>> Lauras >>> place they will not be able to host people sleeping over in the >>> same >>> manner as they usually do (Anto - there is a bed for you as promised >>> though). So we need to recommend some cheap places for booking >>> accommodations as well (Chalmers Studenthem) >>> >>> - as for topics, it seems Maciej suggests a 1.7 release before the sprint >>> so >>> we need to check the todo list >>> >>> Cheers >>> >>> Bea >>> >> Hi >> >> Maybe with logistics being harder we should move the sprint to some >> cheaper place? (I do appreciate OE lending us a room though) >> >> Cheers, >> fijal > > With cheaper place do you mean another country/town than Gothenburg? If so I > would like to remind about FSCons as well as the possibility of doing a > potential presentation for Vinnova in end of October. > > What did you have in mind? Indeed, so quite a few people will be around anyway. Well, so I'm not coming, too expensive.
Cheers, fijal From anto.cuni at gmail.com Wed Oct 12 13:15:54 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 12 Oct 2011 13:15:54 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> <4E954D4E.7020008@changemaker.nu> Message-ID: <4E9576EA.5040608@gmail.com> On 12/10/11 12:14, Maciej Fijalkowski wrote: > Indeed, so quite a few people will be around anyway. > > Well, so I'm not coming, too expensive. we can always use the "general pypy pot" to fund you, can't we? From holger at merlinux.eu Wed Oct 12 12:18:06 2011 From: holger at merlinux.eu (holger krekel) Date: Wed, 12 Oct 2011 10:18:06 +0000 Subject: [pypy-dev] [pypy-commit] pypy py3k: Add a modifiable copy of opcode.py In-Reply-To: <4E95663B.8010707@gmail.com> References: <20111011212716.C42DD82112@wyvern.cs.uni-duesseldorf.de> <4E95663B.8010707@gmail.com> Message-ID: <20111012101806.GN1684@merlinux.eu> On Wed, Oct 12, 2011 at 12:04 +0200, Antonio Cuni wrote: > On 11/10/11 23:27, amauryfa wrote: > >Author: Amaury Forgeot d'Arc > >Branch: py3k > >Changeset: r47947:838f7aaf8802 > >Date: 2011-10-11 23:12 +0200 > >http://bitbucket.org/pypy/pypy/changeset/838f7aaf8802/ > > > >Log: Add a modifiable copy of opcode.py > > > >diff --git a/lib-python/3.2/opcode.py b/lib-python/modified-3.2/opcode.py > >copy from lib-python/3.2/opcode.py > >copy to lib-python/modified-3.2/opcode.py > > > what about killing the * vs modified-* thing while we are at it? > With mercurial it should be very easy to get a diff against the > original one whenever we want. +1. We could make a tag for the original version i guess such that "hg diff -r cpy271 ..." etc. would work. Maybe also a little overview script that lists all modified files and number of changed lines or so. 
holger From fijall at gmail.com Wed Oct 12 12:28:16 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 12 Oct 2011 12:28:16 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: <4E9576EA.5040608@gmail.com> References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> <4E954D4E.7020008@changemaker.nu> <4E9576EA.5040608@gmail.com> Message-ID: On Wed, Oct 12, 2011 at 1:15 PM, Antonio Cuni wrote: > On 12/10/11 12:14, Maciej Fijalkowski wrote: > >> Indeed, so quite a few people will be around anyway. >> >> Well, so I'm not coming, too expensive. > > we can always use the "general pypy pot" to fund you, can't we? > It's not like we even discussed what the general pypy pot will be used for. I think sprint funding is fine, but what are other people opinions? Cheers, fijal From lac at openend.se Wed Oct 12 14:31:34 2011 From: lac at openend.se (Laura Creighton) Date: Wed, 12 Oct 2011 14:31:34 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: Message from Maciej Fijalkowski of "Tue, 11 Oct 2011 21:42:54 +0200." References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> Message-ID: <201110121231.p9CCVYNj010913@theraft.openend.se> Just to clarify: I am fine with all of you eating at my house and cooking together if that is what is required. It is just the 'sleeping at my place' part which is temporarily broken, because I don't have any bathing facilities to offer you (showers and bathtub). The laundry and toilets are fine. It just sounded like a good idea to have a sprint in Gbg at this time because of the fscons and vinnova connection at this time. If people want to sprint someplace else, I am fine with that too. Laura, who cannot even begin to tell you how much she is sorry about this ... 
From arigo at tunes.org Wed Oct 12 22:04:08 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 12 Oct 2011 22:04:08 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> <4E954D4E.7020008@changemaker.nu> <4E9576EA.5040608@gmail.com> Message-ID: Hi Maciej, On Wed, Oct 12, 2011 at 12:28, Maciej Fijalkowski wrote: > It's not like we even discussed what the general pypy pot will be used > for. I think sprint funding is fine, but what are other people > opinions? Of course I agree. Note that 6 nights at Chalmers Studenthem is about ~120 Euros per person in a double room, if I remember correctly. If the problem is that this is still too much, then (as far as I can tell) it should not be an issue to arrange sprint funding for you. We already did this occasionally for other people, btw. A bientôt, Armin. From holger at merlinux.eu Thu Oct 13 06:44:18 2011 From: holger at merlinux.eu (holger krekel) Date: Thu, 13 Oct 2011 04:44:18 +0000 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> <4E954D4E.7020008@changemaker.nu> <4E9576EA.5040608@gmail.com> Message-ID: <20111013044418.GS1684@merlinux.eu> On Wed, Oct 12, 2011 at 22:04 +0200, Armin Rigo wrote: > Hi Maciej, > > On Wed, Oct 12, 2011 at 12:28, Maciej Fijalkowski wrote: > > It's not like we even discussed what the general pypy pot will be used > > for. I think sprint funding is fine, but what are other people > > opinions? > > Of course I agree. Note that 6 nights at Chalmers Studenthem is about > ~120 Euros per person in a double room, if I remember correctly. If > the problem is that this is still too much, then (as far as I can > tell) it should not be an issue to arrange sprint funding for you. We > already did this occasionally for other people, btw.
right, we already agreed before that pypy developers can get travel funding for coming to sprints. holger From lac at openend.se Thu Oct 13 09:43:59 2011 From: lac at openend.se (Laura Creighton) Date: Thu, 13 Oct 2011 09:43:59 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: Message from Armin Rigo of "Wed, 12 Oct 2011 22:04:08 +0200." References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> <4E954D4E.7020008@changemaker.nu> <4E9576EA.5040608@gmail.com> Message-ID: <201110130743.p9D7hxqr003003@theraft.openend.se> In a message of Wed, 12 Oct 2011 22:04:08 +0200, Armin Rigo writes: >Hi Maciej, > >On Wed, Oct 12, 2011 at 12:28, Maciej Fijalkowski wrote: >> It's not like we even discussed what the general pypy pot will be used >> for. I think sprint funding is fine, but what are other people >> opinions? > >Of course I agree. Note that 6 nights at Chalmers Studenthem is about >~120 Euros per person in a double room, if I remember correctly. If >the problem is that this is still too much, then (as far as I can >tell) it should not be an issue to arrange sprint funding for you. We >already did this occasionally for other people, btw. > > >A bientôt, > >Armin. And, again as always, I think that sprint funding is the very best use we can make of our money, as a general principle.
Laura From anto.cuni at gmail.com Thu Oct 13 10:55:50 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 13 Oct 2011 10:55:50 +0200 Subject: [pypy-dev] How to organize the py3k branch (was: [pypy-commit] pypy py3k: Remove print statement) In-Reply-To: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> Message-ID: <4E96A796.4060202@gmail.com> Hi Amaury, hi all, On 12/10/11 22:23, amauryfa wrote: > Author: Amaury Forgeot d'Arc > Branch: py3k > Changeset: r47978:36b998dd9966 > Date: 2011-10-12 01:52 +0200 > http://bitbucket.org/pypy/pypy/changeset/36b998dd9966/ > > Log: Remove print statement uhm... I thought that the idea was to have support for python 2 and 3 from the same codebase, while from your commits it seems that you are actually destroying support for python 2. I think that there are several possible ways to organize the source code, each one with pros and cons. Maybe we should organize an IRC meeting with the interested people to decide which direction to follow? I.e., a list of possibilities (everyone feel free to add more): - have a branch where we have *only* python3, and rely on hg merge to get the improvements to the translator (which is what you are doing right now, I think). It's easier to write, but then it might be hard to keep in sync with the default branch - support for py2 and py3 in the same branch, with minimal duplication of code. This would mean that e.g. in ast.py you would have tons of "if py2: enable_print_stmt()", etc. Personally, I think that the codebase would become too cluttered. - support for py2 and py3 in the same branch, with some duplication of code; e.g., we could copy the existing interpreter/ into interpreter/py3k and modify it there. While we are at it, we should try hard to minimize the code duplication, but then it's up to us to decide when it's better to duplicate or when it's better to share the code between the two. - other?
ciao, Anto From bea at changemaker.nu Thu Oct 13 10:09:45 2011 From: bea at changemaker.nu (Bea During) Date: Thu, 13 Oct 2011 10:09:45 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: <20111013044418.GS1684@merlinux.eu> References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> <4E954D4E.7020008@changemaker.nu> <4E9576EA.5040608@gmail.com> <20111013044418.GS1684@merlinux.eu> Message-ID: <4E969CC9.8090904@changemaker.nu> Hi there holger krekel skrev 2011-10-13 06:44: > On Wed, Oct 12, 2011 at 22:04 +0200, Armin Rigo wrote: >> Hi Maciej, >> >> On Wed, Oct 12, 2011 at 12:28, Maciej Fijalkowski wrote: >>> It's not like we even discussed what the general pypy pot will be used >>> for. I think sprint funding is fine, but what are other people >>> opinions? >> Of course I agree. Note that 6 nights at Chalmers Studenthem is about >> ~120 Euros per person in a double room, if I remember correctly. If >> the problem is that this is still too much, then (as far as I can >> tell) it should not be an issue to arrange sprint funding for you. We >> already did this occasionally for other people, btw. > right, we already agreed before that pypy developers can get travel > funding for coming to sprints. > > holger Yes indeed. And in light of the latest discussions about funding for py3k and numpy we should consider this sprint as a kick off of those efforts so we need the people there who will commit to work on these efforts as well as other core developers to give input and help with input/start up discussions. Could we try to get a sprint announcement out that actually states this it would be a great way to show that we are not dragging our feet on these initiatives. Cheers Bea From lac at openend.se Thu Oct 13 10:25:02 2011 From: lac at openend.se (Laura Creighton) Date: Thu, 13 Oct 2011 10:25:02 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: Message from Bea During of "Thu, 13 Oct 2011 10:09:45 +0200." 
<4E969CC9.8090904@changemaker.nu> References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> <4E954D4E.7020008@changemaker.nu> <4E9576EA.5040608@gmail.com> <20111013044418.GS1684@merlinux.eu> <4E969CC9.8090904@changemaker.nu> Message-ID: <201110130825.p9D8P2Qq007711@theraft.openend.se> In a message of Thu, 13 Oct 2011 10:09:45 +0200, Bea During writes: >Hi there > >holger krekel skrev 2011-10-13 06:44: >> On Wed, Oct 12, 2011 at 22:04 +0200, Armin Rigo wrote: >>> Hi Maciej, >>> >>> On Wed, Oct 12, 2011 at 12:28, Maciej Fijalkowski wrote: >>>> It's not like we even discussed what the general pypy pot will be used >>>> for. I think sprint funding is fine, but what are other people >>>> opinions? >>> Of course I agree. Note that 6 nights at Chalmers Studenthem is about >>> ~120 Euros per person in a double room, if I remember correctly. If >>> the problem is that this is still too much, then (as far as I can >>> tell) it should not be an issue to arrange sprint funding for you. We >>> already did this occasionally for other people, btw. >> right, we already agreed before that pypy developers can get travel >> funding for coming to sprints. >> >> holger >Yes indeed. And in light of the latest discussions about funding for >py3k and numpy we should consider this sprint as a kick off of those >efforts so we need the people there who will commit to work on these >efforts as well as other core developers to give input and help with >input/start up discussions. > >Could we try to get a sprint announcement out that actually states this >it would be a great way to show that we are not dragging our feet on >these initiatives. > >Cheers > >Bea Are we still decided that we are going to have one here, now that Maciej's funding problem is settled?
Laura From arigo at tunes.org Thu Oct 13 11:43:29 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 13 Oct 2011 11:43:29 +0200 Subject: [pypy-dev] How to organize the py3k branch (was: [pypy-commit] pypy py3k: Remove print statement) In-Reply-To: <4E96A796.4060202@gmail.com> References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> <4E96A796.4060202@gmail.com> Message-ID: Hi Amaury, > r47983 be8493f31b60 py3k | amauryfa | 2011-10-12 22:19 +0200 > > pypy/module/sys/system.py > pypy/module/sys/version.py > pypy/module/sys/vm.py > > Fix some metaclasses, and the sys module can now be imported Please wait a second before doing all these changes. You are changing the RPython code to be Python 3. Doing so is Yet Another option that we never really discussed so far: moving RPython to be "RPython 3". I suppose that, by now, we should really consider this as another possible option too; but we must definitely consider what it implies. It seems to imply changing the whole of PyPy (including the whole translation toolchain) to be Python 3. But we cannot drop the Python 2 version. So it seems to me that we'll end up with two versions of the whole translation toolchain (and I'll leave you the merging pains). That sounds bad. A bientôt, Armin. From arigo at tunes.org Thu Oct 13 12:05:16 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 13 Oct 2011 12:05:16 +0200 Subject: [pypy-dev] How to organize the py3k branch (was: [pypy-commit] pypy py3k: Remove print statement) In-Reply-To: References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> <4E96A796.4060202@gmail.com> Message-ID: Re-Hi, On Thu, Oct 13, 2011 at 11:43, Armin Rigo wrote: > Please wait a second before doing all these changes. You are changing > the RPython code to be Python 3. Ah, sorry. Carl Friedrich pointed out that it also works in Python 2.7. Are you actually fixing the RPython code of the PyPy interpreter to work both on Python 2.7 and in Python 3.x?
(Looks like a pain too, but a more bearable one.) A bientôt, Armin. From william.leslie.ttg at gmail.com Thu Oct 13 12:47:45 2011 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Thu, 13 Oct 2011 21:47:45 +1100 Subject: [pypy-dev] How to organize the py3k branch (was: [pypy-commit] pypy py3k: Remove print statement) In-Reply-To: <4E96A796.4060202@gmail.com> References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> <4E96A796.4060202@gmail.com> Message-ID: On 13 October 2011 19:55, Antonio Cuni wrote: > - support for py2 and py3 in the same branch, with minimal duplication of > code. This would mean that e.g. in ast.py you would have tons of "if py2: > enable_print_stmt()", etc. Personally, I think that the codebase would > become too cluttered. In order to support __future__.print_function, you already need this case. Do we have a good idea of where we expect the biggest divergence? Is it things that now return iterables or views, or is it IO? Is it new-style defaults, or unicode field names? > - support for py2 and py3 in the same branch, with some duplication of code; > e.g., we could copy the existing interpreter/ into interpreter/py3k and > modify it there. While we are at it, we should try hard to minimize the > code duplication, but then it's up to us to decide when it's better to > duplicate or when it's better to share the code between the twos. An adaptive approach sounds very sensible. -- William Leslie From lac at openend.se Thu Oct 13 16:19:54 2011 From: lac at openend.se (Laura Creighton) Date: Thu, 13 Oct 2011 16:19:54 +0200 Subject: [pypy-dev] Gothenburg Sprint Dates In-Reply-To: Message from Bea During of "Thu, 13 Oct 2011 10:09:45 +0200."
<4E969CC9.8090904@changemaker.nu> References: <4E6F3FEF.5080600@gmx.de> <4E945A31.80702@gmail.com> <4E94957C.50108@changemaker.nu> <4E954D4E.7020008@changemaker.nu> <4E9576EA.5040608@gmail.com> <20111013044418.GS1684@merlinux.eu> <4E969CC9.8090904@changemaker.nu> Message-ID: <201110131419.p9DEJsPH013883@theraft.openend.se> First draft in extradoc. Please add more details of the topics we want to discuss, and decide whether you want to do the sprinting at my house or at OE. I am fine with either. I don't think there will be any actual construction happening in the house at this time. If I am wrong, I will let you know as soon as I do. Laura From amauryfa at gmail.com Thu Oct 13 16:37:28 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Thu, 13 Oct 2011 16:37:28 +0200 Subject: [pypy-dev] How to organize the py3k branch (was: [pypy-commit] pypy py3k: Remove print statement) In-Reply-To: References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> <4E96A796.4060202@gmail.com> Message-ID: 2011/10/13 Armin Rigo : >> r47983 be8493f31b60 py3k | amauryfa | 2011-10-12 22:19 +0200 >> >> pypy/module/sys/system.py >> pypy/module/sys/version.py >> pypy/module/sys/vm.py >> >> Fix some metaclasses, and the sys module can now be imported > > Please wait a second before doing all these changes. You are changing > the RPython code to be Python 3. Doing so is Yet Another option that > we never really discussed so far: moving RPython to be "RPython 3". I > suppose that, by now, we should really consider this as another > possible option too; but we must definitely consider what it implies. All these changes occur in strings that start with app = gateway.applevel(''' "NOT_RPYTHON" This code is not RPython, and is processed by the new compiler. Normally, the host python should not see this code. Anyway, I run cpython2.6 for my tests, which does not allow this new metaclass syntax. I agree that RPython should stay at version 2.x for the near future.
For example, space.wrap("hello") takes a 8bit string and produces a W_Unicode... Thanks for looking at all this! -- Amaury Forgeot d'Arc From chef at ghum.de Thu Oct 13 16:56:00 2011 From: chef at ghum.de (Massa, Harald Armin) Date: Thu, 13 Oct 2011 16:56:00 +0200 Subject: [pypy-dev] Donation websites Message-ID: I am very happy to finally see an amount-feedback with numbers and bars for py3k on the website ! Thank you so much! Could that also happen for the numpy-funding? As much as I know, 1200 british pounds have allready been pledged, which shall be 1880 USD... Best wishes, Harald -- GHUM GmbH Harald Armin Massa Spielberger Stra?e 49 70435 Stuttgart 0173/9409607 Amtsgericht Stuttgart, HRB 734971 - persuadere. et programmare From fijall at gmail.com Thu Oct 13 17:48:38 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 13 Oct 2011 17:48:38 +0200 Subject: [pypy-dev] Donation websites In-Reply-To: References: Message-ID: On Thu, Oct 13, 2011 at 4:56 PM, Massa, Harald Armin wrote: > I am very happy to finally see an amount-feedback with numbers and > bars for py3k on the website ! Thank you so much! > > Could that also happen for the numpy-funding? As much as I know, 1200 > british pounds have allready been pledged, which shall be 1880 USD... > > Best wishes, > > Harald Hi Harald. I only update numbers when they arrive on the account and current process makes it possible to do roughly weekly, hence don't expect anything in the next 7 days. It's possible to do otherwise, but it's however work and unless someone steps up, it's not gonna happen. Cheers, fijal From chef at ghum.de Thu Oct 13 17:55:37 2011 From: chef at ghum.de (Massa, Harald Armin) Date: Thu, 13 Oct 2011 17:55:37 +0200 Subject: [pypy-dev] Donation websites In-Reply-To: References: Message-ID: fijal, > I only update numbers when they arrive on the account and current > process makes it possible to do roughly weekly, hence don't expect > anything in the next 7 days. 
It's possible to do otherwise, but it's > however work and unless someone steps up, it's not gonna happen. thanks for the information. Is there an API to query the account? How can I step up? Harald -- GHUM GmbH Harald Armin Massa Spielberger Straße 49 70435 Stuttgart 0173/9409607 Amtsgericht Stuttgart, HRB 734971 - persuadere. et programmare From fijall at gmail.com Thu Oct 13 17:57:22 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 13 Oct 2011 17:57:22 +0200 Subject: [pypy-dev] Donation websites In-Reply-To: References: Message-ID: On Thu, Oct 13, 2011 at 5:55 PM, Massa, Harald Armin wrote: > fijal, > >> I only update numbers when they arrive on the account and current >> process makes it possible to do roughly weekly, hence don't expect >> anything in the next 7 days. It's possible to do otherwise, but it's >> however work and unless someone steps up, it's not gonna happen. > > thanks for the information. Is there an API to query the account? How > can I step up? > > Harald > There is no API to query the account, but you can get PayPal or Google Checkout to get back to you with POST. From arigo at tunes.org Thu Oct 13 18:03:40 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 13 Oct 2011 18:03:40 +0200 Subject: [pypy-dev] How to organize the py3k branch (was: [pypy-commit] pypy py3k: Remove print statement) In-Reply-To: References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> <4E96A796.4060202@gmail.com> Message-ID: Hi Amaury, On Thu, Oct 13, 2011 at 16:37, Amaury Forgeot d'Arc wrote: > All these changes occur in strings that start with > app = gateway.applevel(''' Oups! Indeed. Sorry, I mistook that. A bientôt, Armin.
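[For readers unfamiliar with the `gateway.applevel` idiom discussed above: PyPy embeds app-level (plain Python) helper code as strings inside its interpreter sources, marked "NOT_RPYTHON" because it is compiled by PyPy's own compiler rather than treated as RPython. The `applevel` class below is a hypothetical host-Python approximation, invented here only to show the shape of the idiom; it is not the real PyPy implementation.]

```python
class applevel:
    """Hypothetical stand-in for pypy.interpreter.gateway.applevel.
    Here the source string is simply exec'd by the host Python; in PyPy
    it is fed to PyPy's own bytecode compiler instead."""

    def __init__(self, source):
        self.namespace = {}
        exec(source, self.namespace)


app = applevel('''
"NOT_RPYTHON"

def make_pairs(keys, values):
    # app-level code: full Python is allowed here, no RPython restrictions
    return dict(zip(keys, values))
''')

pairs = app.namespace["make_pairs"]("ab", [1, 2])
```

The "NOT_RPYTHON" marker at the top of the string is what tells the translation toolchain to leave this code alone, which is why changes inside such strings do not move RPython itself to Python 3.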
From anto.cuni at gmail.com Fri Oct 14 14:40:36 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 14 Oct 2011 14:40:36 +0200 Subject: [pypy-dev] How to organize the py3k branch In-Reply-To: References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> <4E96A796.4060202@gmail.com> Message-ID: <4E982DC4.9080607@gmail.com> Hi Amaury, On 13/10/11 16:37, Amaury Forgeot d'Arc wrote: > > All these changes occur in strings that start with > app = gateway.applevel(''' > "NOT_RPYTHON" > > This code is not RPython, and is processed by the new compiler. > Normally, the host python should not see this code. > > Anyway, I run cpython2.6 for my tests, which does not > allow this new metaclass syntax. > > I agree that RPython should stay at version 2.x for the near future. > For example, space.wrap("hello") takes an 8-bit string and produces a W_Unicode... > > Thanks for looking at all this! that's true, but from the commits I saw it seems that you are destroying support for the Python 2 interpreter anyway. Is that correct? I renew my suggestion of having a meeting to decide which strategy to follow. Who would like to participate in the meeting? ciao, Anto From amauryfa at gmail.com Fri Oct 14 14:07:39 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 14 Oct 2011 14:07:39 +0200 Subject: [pypy-dev] How to organize the py3k branch In-Reply-To: <4E982DC4.9080607@gmail.com> References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> <4E96A796.4060202@gmail.com> <4E982DC4.9080607@gmail.com> Message-ID: Hi, 2011/10/14 Antonio Cuni > that's true, but from the commits I saw it seems that you are destroying > support for the Python 2 interpreter anyway. Is that correct? > Yes, that's true. It seemed to me that supporting both versions in the same files would be too much of a hassle. I'd prefer to regularly merge branches; conflicts should be limited since the 2.7 version won't grow new Python features.
But we could revisit this, I just would like to avoid tons of #ifdef around the changes... > I renew my suggestion of having a meeting to decide which strategy > to follow. > Unfortunately I'm travelling early this afternoon, and I'm not sure I'll have a connection over the weekend. It will be easier next week, for example around 17:00 CEST. -- Amaury Forgeot d'Arc From anto.cuni at gmail.com Fri Oct 14 16:54:17 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 14 Oct 2011 16:54:17 +0200 Subject: [pypy-dev] How to organize the py3k branch In-Reply-To: References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> <4E96A796.4060202@gmail.com> <4E982DC4.9080607@gmail.com> Message-ID: <4E984D19.3090504@gmail.com> Hi, On 14/10/11 14:07, Amaury Forgeot d'Arc wrote: > Yes, that's true. It seemed to me that supporting both versions in the same > files would be too much of a hassle. > I'd prefer regularly merge branches, conflicts should be limited since the 2.7 > version won't grow new Python features. yes, I see the point. I'm not saying that this is the wrong approach, just that it's probably better to think a bit about it and decide which direction to follow. > But we could revisit this, I just would like to avoid tons of #ifdef around > the changes... I agree with you that having tons of #ifdef should be avoided. > Unfortunately I'm travelling early this afternoon, and I'm not sure I'll have a > connection over the weekend. > It will be easier next week, for example around 17:00 CEST. what about Monday at 17:00 CEST? From samuel.vaiter at gmail.com Fri Oct 14 17:59:29 2011 From: samuel.vaiter at gmail.com (Samuel Vaiter) Date: Fri, 14 Oct 2011 17:59:29 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] Message-ID: Hi, I am willing to contribute to PyPy, especially on the Numpy port.
The main reason why Numpy is my main interest is that as Ph.D student in Applied Mathematics, I really hope one day we will be able to perform numerical computation without using heavy binding in C/Fortran or intermediate solution like Cython. I haven't contributed to any open-source project yet, and I may have to learn some conventions. However, I'm a regular user of DVCS. Looking at the source code and the dev mailing list, it seems a big refactoring is currently being done on the numpy branch. Are there any small fixes/features easy enough to implement for a newbie on the topic? Samuel From fijall at gmail.com Fri Oct 14 20:37:38 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 14 Oct 2011 20:37:38 +0200 Subject: [pypy-dev] How to organize the py3k branch In-Reply-To: <4E984D19.3090504@gmail.com> References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> <4E96A796.4060202@gmail.com> <4E982DC4.9080607@gmail.com> <4E984D19.3090504@gmail.com> Message-ID: On Fri, Oct 14, 2011 at 4:54 PM, Antonio Cuni wrote: > Hi, > > On 14/10/11 14:07, Amaury Forgeot d'Arc wrote: > >> Yes, that's true. It seemed to me that supporting both versions in the >> same >> files would be too much of a hassle. >> I'd prefer regularly merge branches, conflicts should be limited since the >> 2.7 >> version won't grow new Python features. > > yes, I see the point. I'm not saying that this is the wrong approach, just > that it's probably better to think a bit about it and decide which direction > to follow. > >> But we could revisit this, I just would like to avoid tons of #ifdef >> around >> the changes... > > I agree with you that having tons of #ifdef should be avoided. > >> Unfortunately I'm travelling early this afternoon, and I'm not sure to >> have a >> connection during the week-end. >> It will be easier next week, for example around 17:00 CEST. > > what about monday at 17:00 CEST?
If you want me there it has to be later (preferably after 7pm) > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From anto.cuni at gmail.com Sat Oct 15 18:50:15 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Sat, 15 Oct 2011 18:50:15 +0200 Subject: [pypy-dev] How to organize the py3k branch In-Reply-To: References: <20111012202315.3C2B982112@wyvern.cs.uni-duesseldorf.de> <4E96A796.4060202@gmail.com> <4E982DC4.9080607@gmail.com> <4E984D19.3090504@gmail.com> Message-ID: <4E99B9C7.2050502@gmail.com> On 14/10/11 20:37, Maciej Fijalkowski wrote: >> what about monday at 17:00 CEST? > > If you want me there it has to be later (preferably after 7pm) does the "later" apply only on Monday or also on the other days? From fijall at gmail.com Sun Oct 16 09:52:13 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 16 Oct 2011 09:52:13 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Fri, Oct 14, 2011 at 5:59 PM, Samuel Vaiter wrote: > Hi, > > I am willing to contribute to PyPy, especially on Numpy port. The main > reason why Numpy is my main interest is that as Ph.D student in > Applied Mathematics, I really hope one day we will be able to perform > numerical computation without using heavy binding in C/Fortran or > intermediate solution like Cython. > I didn't contribute to any opensource project yet, and I may have to > learn some conventions. However, I'm a regular user of DCVS. Hi! It's great to hear from you! > > Looking at the source code and the dev mailing list, it's seems a big > refactoring is currently done on the numpy branch. Is there any small > fixes/features easy enough to implement for a newbie on the topic ? > > Samuel Yes, there are definitely small things that you can work on. A good start would be to pick a missing feature from numpy and start implementing it.
There is usually someone on IRC who can help if you have some immediate questions. Do you want me to find you some small thing? Cheers, fijal From stefan_ml at behnel.de Sun Oct 16 14:29:33 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 16 Oct 2011 14:29:33 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: Samuel Vaiter, 14.10.2011 17:59: > The main > reason why Numpy is my main interest is that as Ph.D student in > Applied Mathematics, I really hope one day we will be able to perform > numerical computation without using heavy binding in C/Fortran or > intermediate solution like Cython. I guess you didn't mean it that way, but "intermediate solution" makes it sound like you expect any of these to go away one day. They sure won't. Manually optimised C and Fortran code will always beat JIT compilers, especially in numerics. It's a game they can't win - whenever JIT compilers get too close to hand optimised code, someone will come along and write better code. Stefan From fijall at gmail.com Sun Oct 16 17:50:45 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 16 Oct 2011 17:50:45 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 2:29 PM, Stefan Behnel wrote > Samuel Vaiter, 14.10.2011 17:59: >> >> The main >> reason why Numpy is my main interest is that as Ph.D student in >> Applied Mathematics, I really hope one day we will be able to perform >> numerical computation without using heavy binding in C/Fortran or >> intermediate solution like Cython. > > I guess you didn't mean it that way, but "intermediate solution" makes it > sound like you expect any of these to go away one day. They sure won't. > Manually optimised C and Fortran code will always beat JIT compilers, > especially in numerics. 
It's a game they can't win - whenever JIT compilers > get too close to hand optimised code, someone will come along and write > better code. > > Stefan I guess what you say is at best [citation needed]. We have proven already that we can perform several optimizations that are very hard to perform at the C level. And indeed, while you can always argue "well, you can just write a better compiler", it's true also for JITs. And we're only at the beginning of what we can do. One example that I have in mind is array expressions that depend on runtime - we can optimize them fairly well in the JIT (which means SSE and whatnot), but you just can't get the same thing in C, because you're unable to compile native code per a set of array operations. Cheers, fijal From alex.gaynor at gmail.com Sun Oct 16 17:54:19 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sun, 16 Oct 2011 11:54:19 -0400 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 11:50 AM, Maciej Fijalkowski wrote: > On Sun, Oct 16, 2011 at 2:29 PM, Stefan Behnel wrote > > Samuel Vaiter, 14.10.2011 17:59: > >> > >> The main > >> reason why Numpy is my main interest is that as Ph.D student in > >> Applied Mathematics, I really hope one day we will be able to perform > >> numerical computation without using heavy binding in C/Fortran or > >> intermediate solution like Cython. > > > > I guess you didn't mean it that way, but "intermediate solution" makes it > > sound like you expect any of these to go away one day. They sure won't. > > Manually optimised C and Fortran code will always beat JIT compilers, > > especially in numerics. It's a game they can't win - whenever JIT > compilers > > get too close to hand optimised code, someone will come along and write > > better code. > > > > Stefan > > I guess what you say is at best [citation needed]. 
We have proven > already that we can perform several optimizations that are very hard > to perform at the C level. And indeed, while you can always argue > "well, you can just write a better compiler", it's true also for JITs. > And we're only at the beginning of what we can do. > > One example that I have in mind is array expressions that depend on > runtime - we can optimize them fairly well in the JIT (which means SSE > and whatnot), but you just can't get the same thing in C, because > you're unable to compile native code per a set of array operations. > > Cheers, > fijal > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Another example, which no fortran compiler will ever be able to do, is if you create a ufunc from a Python function, you can still inline it into assembler that's emitted for an operation so: a + b * sin(my_ufunc(c)) still generates a single loop in assembler. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sun Oct 16 18:34:58 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 16 Oct 2011 18:34:58 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: Maciej Fijalkowski, 16.10.2011 17:50: > On Sun, Oct 16, 2011 at 2:29 PM, Stefan Behnel wrote >> Samuel Vaiter, 14.10.2011 17:59: >>> The main >>> reason why Numpy is my main interest is that as Ph.D student in >>> Applied Mathematics, I really hope one day we will be able to perform >>> numerical computation without using heavy binding in C/Fortran or >>> intermediate solution like Cython. 
>> >> I guess you didn't mean it that way, but "intermediate solution" makes it >> sound like you expect any of these to go away one day. They sure won't. >> Manually optimised C and Fortran code will always beat JIT compilers, >> especially in numerics. It's a game they can't win - whenever JIT compilers >> get too close to hand optimised code, someone will come along and write >> better code. > > I guess what you say is at best [citation needed]. Feel free to quote me. :D > We have proven > already that we can perform several optimizations that are very hard > to perform at the C level. And indeed, while you can always argue > "well, you can just write a better compiler", it's true also for JITs. I wasn't comparing a JIT to another compiler. I was comparing it to a human programmer. A JIT, just like any other compiler, will never be able to *understand* the code it compiles, and it can only apply the optimisations that it was taught. JITs are nice when you need performance quickly and don't care about the last few CPU cycles. However, there are cases where it's not very satisfactory to learn that your JIT compiler, in the current state that it has, can only get you up to, say, 90%, or even 95% of the speed that you need for your problem. In those cases where you do care about the last 5%, and numerics people care about them surprisingly often, you will eventually end up using a low-level language, usually C or Fortran, to make sure you get as much out of your code as possible. JIT compilers are structurally much harder to manually override than static compilers, and they are certainly not designed to help with the "but I know what I'm doing" cases. Mind you, I'm not saying that JIT compilers aren't capable of generating surprisingly fast code. They certainly are, and what they deliver is often "good enough". 
I'm just saying that statically compiled low-level languages will *always* have a raison d'?tre, if only to deliver support for the "I know what I'm doing" cases. Stefan From samuel.vaiter at gmail.com Sun Oct 16 18:41:43 2011 From: samuel.vaiter at gmail.com (Samuel Vaiter) Date: Sun, 16 Oct 2011 18:41:43 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: Hi, Thanks for you answer. > Yes, there are definitely small things that you can work on. > > A good start would be to pick a missing feature from numpy and start > implementing it. There is usually someone on IRC who can help if you > have some immediate questions. > > Do you want me to find you some small thing? > Yeah, it might be a good thing for a start to have a "tutor" if you have the time to think about it. @Stefan : I agree. By "intermediate solution" I mean the total time : time to think about data structure + time to implement + time to execute. I use Numpy almost all the time as a tool to do "exploration". I don't mind to have the fastest execution time, I only look after the total time ;) Cython is a great tool, but 90% of my issues are index-related : Numpy and Matlab are slow when you loop over the index of your array. And I think PyPy-Numpy (will) provides a much better solution in MY case. Regards, Samuel From nicolas.hureau at gmail.com Sun Oct 16 11:00:49 2011 From: nicolas.hureau at gmail.com (Nicolas Hureau) Date: Sun, 16 Oct 2011 11:00:49 +0200 Subject: [pypy-dev] Anyone interested in a MIPS port... In-Reply-To: References: Message-ID: Hello everyone, On 24 August 2011 15:14, Armin Rigo wrote: > Sven and David are currently working on the PowerPC backend in the > branch "ppc-jit-backend", if you want to follow; it is still at an > early stage, which means that the amount of code so far should be > reasonable. > > Sorry to answer this late, it seems that nobody is very much > interested in contributing... 
?All I can promise myself is to give you > some help, as I do right now with Sven. :-) I'm also interested in contributing to a MIPS port of the JIT backend. Is there any tip for cross-compiling PyPy that I should know about before beginning ? Thanks, -- Nicolas ??kalenz?? Hureau From fijall at gmail.com Sun Oct 16 20:01:17 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 16 Oct 2011 20:01:17 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 6:34 PM, Stefan Behnel wrote: > Maciej Fijalkowski, 16.10.2011 17:50: >> >> On Sun, Oct 16, 2011 at 2:29 PM, Stefan Behnel wrote >>> >>> Samuel Vaiter, 14.10.2011 17:59: >>>> >>>> The main >>>> reason why Numpy is my main interest is that as Ph.D student in >>>> Applied Mathematics, I really hope one day we will be able to perform >>>> numerical computation without using heavy binding in C/Fortran or >>>> intermediate solution like Cython. >>> >>> I guess you didn't mean it that way, but "intermediate solution" makes it >>> sound like you expect any of these to go away one day. They sure won't. >>> Manually optimised C and Fortran code will always beat JIT compilers, >>> especially in numerics. It's a game they can't win - whenever JIT >>> compilers >>> get too close to hand optimised code, someone will come along and write >>> better code. >> >> I guess what you say is at best [citation needed]. > > Feel free to quote me. :D > > >> We have proven >> already that we can perform several optimizations that are very hard >> to perform at the C level. And indeed, while you can always argue >> "well, you can just write a better compiler", it's true also for JITs. > > I wasn't comparing a JIT to another compiler. I was comparing it to a human > programmer. A JIT, just like any other compiler, will never be able to > *understand* the code it compiles, and it can only apply the optimisations > that it was taught. 
JITs are nice when you need performance quickly and > don't care about the last few CPU cycles. However, there are cases where > it's not very satisfactory to learn that your JIT compiler, in the current > state that it has, can only get you up to, say, 90%, or even 95% of the > speed that you need for your problem. In those cases where you do care about > the last 5%, and numerics people care about them surprisingly often, you > will eventually end up using a low-level language, usually C or Fortran, to > make sure you get as much out of your code as possible. JIT compilers are > structurally much harder to manually override than static compilers, and > they are certainly not designed to help with the "but I know what I'm doing" > cases. I just claim you're wrong here and there are cases where you can't beat the JIT compiler, precisely because some stuff depends on runtime data and you can't encode all the possibilities in a statically compiled code (at least in theory). Granted, you might want to have an access to emitting assembler on the fly and do it better than a compiler, but sometimes you need a way to emit assembler on the fly. > > Mind you, I'm not saying that JIT compilers aren't capable of generating > surprisingly fast code. They certainly are, and what they deliver is often > "good enough". I'm just saying that statically compiled low-level languages > will *always* have a raison d'?tre, if only to deliver support for the "I > know what I'm doing" cases. We still have to implement JITs in something :) From fijall at gmail.com Sun Oct 16 20:03:04 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 16 Oct 2011 20:03:04 +0200 Subject: [pypy-dev] Anyone interested in a MIPS port... 
In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 11:00 AM, Nicolas Hureau wrote: > Hello everyone, > > On 24 August 2011 15:14, Armin Rigo wrote: >> Sven and David are currently working on the PowerPC backend in the >> branch "ppc-jit-backend", if you want to follow; it is still at an >> early stage, which means that the amount of code so far should be >> reasonable. >> >> Sorry to answer this late, it seems that nobody is very much >> interested in contributing... All I can promise myself is to give you >> some help, as I do right now with Sven. :-) > > I'm also interested in contributing to a MIPS port of the JIT backend. > > Is there any tip for cross-compiling PyPy that I should know about > before beginning? > > Thanks, Cross compiling pypy is a bit hairy because PyPy (at translation time) queries the underlying Python environment for some details. You might be able to get rid of that, but it's a bit of work. We're here to help though. And no, there are no cross-compiler tools I'm aware of. Cheers, fijal From fijall at gmail.com Sun Oct 16 20:03:50 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 16 Oct 2011 20:03:50 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 6:41 PM, Samuel Vaiter wrote: > Hi, > > Thanks for your answer. > >> Yes, there are definitely small things that you can work on. >> >> A good start would be to pick a missing feature from numpy and start >> implementing it. There is usually someone on IRC who can help if you >> have some immediate questions. >> >> Do you want me to find you some small thing? >> > Yeah, it might be a good thing for a start to have a "tutor" if you > have the time to think about it. > Hi I'm on holiday now but maybe I can think about something. Alex: any idea?
Cheers, fijal From alex.gaynor at gmail.com Sun Oct 16 20:06:58 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sun, 16 Oct 2011 14:06:58 -0400 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 2:03 PM, Maciej Fijalkowski wrote: > On Sun, Oct 16, 2011 at 6:41 PM, Samuel Vaiter > wrote: > > Hi, > > > > Thanks for you answer. > > > >> Yes, there are definitely small things that you can work on. > >> > >> A good start would be to pick a missing feature from numpy and start > >> implementing it. There is usually someone on IRC who can help if you > >> have some immediate questions. > >> > >> Do you want me to find you some small thing? > >> > > Yeah, it might be a good thing for a start to have a "tutor" if you > > have the time to think about it. > > > > Hi > > I'm on holiday now but maybe I can think about something. Alex: any idea? > > Cheers, > fijal > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Adding new ufuncs is a great introductory task to numpy. I'm not sure which ones we're missing, but I'm sure we are :) You could also add Python-ufunc support, that is the ability to create a ufunc from a Python function. This shouldn't be too difficult. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan_ml at behnel.de Sun Oct 16 20:31:53 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 16 Oct 2011 20:31:53 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: Maciej Fijalkowski, 16.10.2011 20:01: > On Sun, Oct 16, 2011 at 6:34 PM, Stefan Behnel wrote: >> Maciej Fijalkowski, 16.10.2011 17:50: >>> We have proven >>> already that we can perform several optimizations that are very hard >>> to perform at the C level. And indeed, while you can always argue >>> "well, you can just write a better compiler", it's true also for JITs. >> >> I wasn't comparing a JIT to another compiler. I was comparing it to a human >> programmer. A JIT, just like any other compiler, will never be able to >> *understand* the code it compiles, and it can only apply the optimisations >> that it was taught. JITs are nice when you need performance quickly and >> don't care about the last few CPU cycles. However, there are cases where >> it's not very satisfactory to learn that your JIT compiler, in the current >> state that it has, can only get you up to, say, 90%, or even 95% of the >> speed that you need for your problem. In those cases where you do care about >> the last 5%, and numerics people care about them surprisingly often, you >> will eventually end up using a low-level language, usually C or Fortran, to >> make sure you get as much out of your code as possible. JIT compilers are >> structurally much harder to manually override than static compilers, and >> they are certainly not designed to help with the "but I know what I'm doing" >> cases. > > I just claim you're wrong here and there are cases where you can't > beat the JIT compiler, precisely because some stuff depends on runtime > data and you can't encode all the possibilities in a statically > compiled code (at least in theory). 
Regarding David's response, I agree that there are cases where JITs can help in limiting the code explosion that you'd get from statically generating all possible optimised cases for generic code. A JIT only needs to instantiate the cases that really exist at runtime. Obviously, that does not automatically mean that the JIT would generate code that is as fast or faster than what a programmer would write for *one* of the specific cases by tuning the code accordingly. It just means that it would generate better code *on average* when looking at the whole corpus, because it can simply (and quickly) adapt the code to more cases as needed. Whether that's what the programmer wants depends on the use case. I see the advantage for that especially in library code that needs to deal with basically all cases efficiently, as David pointed out. Stefan From david.schneider at picle.org Sun Oct 16 20:41:14 2011 From: david.schneider at picle.org (David Schneider) Date: Sun, 16 Oct 2011 20:41:14 +0200 Subject: [pypy-dev] Anyone interested in a MIPS port... In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 20:03, Maciej Fijalkowski wrote: > On Sun, Oct 16, 2011 at 11:00 AM, Nicolas Hureau > wrote: > > Hello everyone, > > > > On 24 August 2011 15:14, Armin Rigo wrote: > >> Sven and David are currently working on the PowerPC backend in the > >> branch "ppc-jit-backend", if you want to follow; it is still at an > >> early stage, which means that the amount of code so far should be > >> reasonable. > >> > >> Sorry to answer this late, it seems that nobody is very much > >> interested in contributing... All I can promise myself is to give you > >> some help, as I do right now with Sven. :-) > > > > I'm also interested in contributing to a MIPS port of the JIT backend. > > > > Is there any tip for cross-compiling PyPy that I should know about > > before beginning?
> > > Thanks, > > Cross compiling pypy is a bit hairy because PyPy (at translation time) > queries the underlying Python environment for some details. You might be > able to get rid of that, but it's a bit of work. We're here to help > though. And no, there are no cross-compiler tools I'm aware of. > > Cheers, > fijal > > Hi, I have been using the scratchbox2 toolchain (http://freedesktop.org/wiki/Software/sbox2) to cross-translate PyPy for ARM targeting the Ubuntu ARM port. There is a bit of documentation about using it in the ARM branch at https://bitbucket.org/pypy/pypy/src/arm-backend-2/pypy/doc/arm.rst Maybe the information there is helpful for MIPS. Greetings, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Sun Oct 16 19:13:28 2011 From: cournape at gmail.com (David Cournapeau) Date: Sun, 16 Oct 2011 18:13:28 +0100 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 5:34 PM, Stefan Behnel wrote: > > I wasn't comparing a JIT to another compiler. I was comparing it to a human > programmer. A JIT, just like any other compiler, will never be able to > *understand* the code it compiles, and it can only apply the optimisations > that it was taught. I don't understand your argument. There are *many* situations where the best time to make a decision on how to generate machine code is runtime, not offline compile time. There are many things in numpy that are very difficult to get fast because we can't know how to perform them without information that is only known at runtime, e.g.: - non-trivial iteration. The neighborhood iterators we have in numpy are very slow because of the cost of function calls that can't really be bypassed unless you start to generate hundreds or even more small functions for each special case (number of dimensions, is the stride value == item size, etc...) - anything that requires specialization on the type.
A typical example is the sparse array code in scipy.sparse. Some code is C++ code that is templated on the contained value and index size. But because the generated code is so big, we need to restrict the available types, otherwise we would need to compile literally thousands of functions which are doing exactly the same. Even if most people don't use most of them. One area where JIT is not that useful is to replace existing fortran/c code. Not only am I doubtful about a JIT beating the Intel compiler for linear algebra stuff, but more significantly, rewriting the existing codebase would take many man-years. Some of this code requires deep knowledge of the underlying computation, and there is also the issue of correctness in floating point code generation. Given that decade-old compilers get it wrong, I would expect pypy jit to have quite a few funky corner cases as well. cheers, David From david.schneider at picle.org Sun Oct 16 20:21:29 2011 From: david.schneider at picle.org (David Schneider) Date: Sun, 16 Oct 2011 20:21:29 +0200 Subject: [pypy-dev] Anyone interested in a MIPS port... In-Reply-To: References: Message-ID: <7D510742-744C-40E7-8041-82872DAA4B64@picle.org> On 16.10.2011, at 20:03, Maciej Fijalkowski wrote: > On Sun, Oct 16, 2011 at 11:00 AM, Nicolas Hureau > wrote: >> Hello everyone, >> >> On 24 August 2011 15:14, Armin Rigo wrote: >>> Sven and David are currently working on the PowerPC backend in the >>> branch "ppc-jit-backend", if you want to follow; it is still at an >>> early stage, which means that the amount of code so far should be >>> reasonable. >>> >>> Sorry to answer this late, it seems that nobody is very much >>> interested in contributing... All I can promise myself is to give you >>> some help, as I do right now with Sven. :-) >> >> I'm also interested in contributing to a MIPS port of the JIT backend. >> >> Is there any tip for cross-compiling PyPy that I should know about >> before beginning?
>> >> Thanks, > > Cross compiling pypy is a bit hairy because PyPy (at translation time) > queries the underlying Python environment for some details. You might be > able to get rid of that, but it's a bit of work. We're here to help > though. And no, there are no cross-compiler tools I'm aware of. > > Cheers, > fijal > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From ian at ianozsvald.com Sun Oct 16 23:20:24 2011 From: ian at ianozsvald.com (Ian Ozsvald) Date: Sun, 16 Oct 2011 22:20:24 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project Message-ID: Hi all. Hi Fijal, you tweeted in response to my https://twitter.com/#!/ianozsvald/status/124898441087299584 the other day. I met Travis Oliphant on Friday at the Enthought Cambridge office opening. Didrik Pinte and I mentioned that we'd offered £600 each towards pypy+numpy integration. Travis had a few thoughts on the matter and this has left me in the position of not being sure of the full costs and benefits of the pypy+numpy project. The main position (held by Travis and several others - and 'this is as best as I remember it and I'm open to correction') was that porting just numpy could leave much of the scipy ecosystem separated from pypy as the numpy port wouldn't have the same (and maybe I'm getting details mixed up here) API so couldn't be compiled easily. I've bcc'd Travis and Didrik, maybe someone else can come and clear the position (and correct my inevitable errors). I use numpy and parts of scipy and haven't looked into pypy's specifics so I'm far too ignorant on the whole subject. I'd like to pose some questions: * how big is the scipy ecosystem beyond numpy? What's the rough line count for Python, C, Fortran etc that depends on numpy? * can these other libraries *easily* be compiled against a pypy+numpy port (and if not, why not?) * are there other routes for numpy work (e.g.
refactoring the core numpy libs and separating the C-dependent interface away, opening the door to a side-by-side pypy interface) that might benefit both the CPython and pypy communities? I apologise for the above being rather vague, I'm hoping some of you can help clarify the pros and cons of whatever options are available. Cheers, Ian. -- Ian Ozsvald (A.I. researcher) ian at IanOzsvald.com http://IanOzsvald.com http://MorConsulting.com/ http://StrongSteam.com/ http://SocialTiesApp.com/ http://TheScreencastingHandbook.com http://FivePoundApp.com/ http://twitter.com/IanOzsvald From arigo at tunes.org Sun Oct 16 23:21:34 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 16 Oct 2011 23:21:34 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: Hi David, On Sun, Oct 16, 2011 at 19:13, David Cournapeau wrote: > (...) and there is also the issue > of correctness in floating point code generation. Given that > decade-old compilers get it wrong, I would expect pypy jit to have > quite a few funky corner cases as well. No, we should not have corner cases, because we don't go there at all. We know very well that rewriting operations on floats can slightly change their results, so we don't do it. In other words the JIT produces a sequence of residual operations that has bit-wise the same effect as the original sequence of Python operations. (More precisely, it seems that we only replace FLOAT_MUL(x, 1.0) by "x" and FLOAT_MUL(x, -1.0) by "-x", as well as simplify repeated FLOAT_NEG's and assume that FLOAT_MUL's are commutative. As far as I can tell these trivial optimizations are all bit-wise correct, at least on modern FPUs.) A bientôt, Armin.
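As a quick sanity check (plain Python, not PyPy source), the identities above can be verified directly, since they are properties of IEEE 754 doubles rather than of PyPy itself: multiplying by 1.0 or -1.0 preserves the exact bit pattern, including for signed zeros, whereas the superficially similar rewrite of x + 0.0 into x would not be bit-wise safe, because -0.0 + 0.0 produces +0.0.

```python
import math
import struct

def bits(x):
    # Raw IEEE 754 bit pattern of a double, for bit-wise comparison.
    return struct.pack('<d', x)

# FLOAT_MUL(x, 1.0) -> x and FLOAT_MUL(x, -1.0) -> -x are bit-wise safe,
# even for signed zeros and infinities:
for x in (2.5, -2.5, 0.0, -0.0, float('inf'), float('-inf')):
    assert bits(x * 1.0) == bits(x)
    assert bits(x * -1.0) == bits(-x)

# By contrast, x + 0.0 -> x is NOT a safe rewrite: IEEE 754 addition in
# round-to-nearest mode gives -0.0 + 0.0 == +0.0, flipping the zero's sign.
neg_zero = -0.0
assert math.copysign(1.0, neg_zero) == -1.0        # negative zero
assert math.copysign(1.0, neg_zero + 0.0) == 1.0   # sign is lost after + 0.0
```

(NaN inputs are deliberately left out of the check, since NaN payload bits are not guaranteed to survive arithmetic on all hardware.)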
From alex.gaynor at gmail.com Sun Oct 16 23:25:23 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sun, 16 Oct 2011 17:25:23 -0400 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 5:21 PM, Armin Rigo wrote: > Hi David, > > On Sun, Oct 16, 2011 at 19:13, David Cournapeau > wrote: > > (...) and there is also the issue > > of correctness in floating point code generation. Given that > > decade-old compilers get it wrong, I would expect pypy jit to have > > quite a few funky corner cases as well. > > No, we should not have corner cases, because we don't go there at all. > We know very well that rewriting operations on floats can slightly > change their results, so we don't do it. In other words the JIT > produces a sequence of residual operations that has bit-wise the same > effect as the original sequence of Python operations. > > (More precisely, it seems that we only replace FLOAT_MUL(x, 1.0) by > "x" and FLOAT_MUL(x, -1.0) by "-x", as well as simplify repeated > FLOAT_NEG's and assume that FLOAT_MUL's are commutative. As far as I > can tell these trivial optimizations are all bit-wise correct, at > least on modern FPUs.) > > > A bientôt, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > I should note that I wrote all of these operations by verifying that GCC would do them, as well as testing on obscure values. Note for example that we don't constant fold x + 0.0 (changes the sign of x at x == -0.0). Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arigo at tunes.org Mon Oct 17 00:10:02 2011 From: arigo at tunes.org (Armin Rigo) Date: Mon, 17 Oct 2011 00:10:02 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: Hi, On Sun, Oct 16, 2011 at 23:41, David Cournapeau wrote: > Interesting to know. But then, wouldn't this limit the speed gains to > be expected from the JIT ? Yes, to some extent. It cannot give you the last bit of performance improvements you could expect from arithmetic optimizations, but (as usual) you get already the several-times improvements of e.g. removing the boxing and unboxing of float objects. Personally I'm wary of going down that path, because it means that the results we get could suddenly change their least significant digit(s) when the JIT kicks in. At least there are multiple tests in the standard Python test suite that would fail because of that. > And I am not sure I understand how you can "not go there" if you want > to vectorize code to use SIMD instruction sets ? I'll leave fijal to answer this question in detail :-) I suppose that the goal is first to use SIMD when explicitly requested in the RPython source, in the numpy code that operates on matrices; and not do the harder job of automatically unrolling and SIMD-ing loops containing Python float operations. But even the latter could be done without giving up on the idea that all Python operations should be present in a bit-exact way (e.g. by using SIMD on 64-bit floats, not on 32-bit floats). A bientôt, Armin. From cournape at gmail.com Sun Oct 16 23:41:01 2011 From: cournape at gmail.com (David Cournapeau) Date: Sun, 16 Oct 2011 22:41:01 +0100 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 10:21 PM, Armin Rigo wrote: > Hi David, > > On Sun, Oct 16, 2011 at 19:13, David Cournapeau wrote: >> (...) and there is also the issue >> of correctness in floating point code generation.
Given that >> decade-old compilers get it wrong, I would expect pypy jit to have >> quite a few funky corner cases as well. > > No, we should not have corner cases, because we don't go there at all. > We know very well that rewriting operations on floats can slightly > change their results, so we don't do it. In other words the JIT > produces a sequence of residual operations that has bit-wise the same > effect as the original sequence of Python operations. > > (More precisely, it seems that we only replace FLOAT_MUL(x, 1.0) by > "x" and FLOAT_MUL(x, -1.0) by "-x", as well as simplify repeated > FLOAT_NEG's and assume that FLOAT_MUL's are commutative. As far as I > can tell these trivial optimizations are all bit-wise correct, at > least on modern FPUs.) Interesting to know. But then, wouldn't this limit the speed gains to be expected from the JIT ? And I am not sure I understand how you can "not go there" if you want to vectorize code to use SIMD instruction sets ? David From cournape at gmail.com Mon Oct 17 00:01:56 2011 From: cournape at gmail.com (David Cournapeau) Date: Sun, 16 Oct 2011 23:01:56 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: Hi Ian, On Sun, Oct 16, 2011 at 10:20 PM, Ian Ozsvald wrote: > > I'd like to pose some questions: > * how big is the scipy ecosystem beyond numpy? What's the rough line > count for Python, C, Fortran etc that depends on numpy? The ecosystem is pretty big. There are at least in the order of hundreds of packages that depend directly on numpy and scipy. For scipy alone, the raw count is around 150k-300k LOC (it is a bit hard to estimate because we include some swig-generated code that I have ignored here, and some code duplication to deal with distutils insanity). There is around 80k LOC of fortran alone in there. More and more scientific code uses cython for speed or just for interfacing with C (and recently C++).
Other tools have been used for similar reasons (f2py, in particular, to automatically wrap fortran and C). f2py at least is quite tightly coupled to the numpy C API. I know there is work for a pypy-friendly backend for cython, but I don't know where things are there. I would like to see less C boilerplate code in scipy, and more cython usage (which generates faster code and is much more maintainable); this can also benefit pypy, if only for making the scipy code less dependent on CPython details. One thing I have little doubt about is that pypy needs a "story" to make wrapping of fortran/c/c++ libraries easy, because otherwise few people in the scientific community will be interested. For better or worse, there are tens of millions of lines of code written in those languages, and a lot of them domain specific (you will not write a decent FFT code without knowing a lot about its implementation details, same for large eigenvalue problems). There need to be some automatic wrapper generators. Scipy alone easily wraps a thousand if not more functions written in fortran. cheers, David From vsapre80 at gmail.com Mon Oct 17 08:21:47 2011 From: vsapre80 at gmail.com (Vishal) Date: Mon, 17 Oct 2011 11:51:47 +0530 Subject: [pypy-dev] Fwd: Anyone interested in a MIPS port... In-Reply-To: References: Message-ID: Forwarding it again to pypy-dev. Sorry, missed the first time. Regards, Vishal ---------- Forwarded message ---------- From: Vishal Date: Sun, Oct 16, 2011 at 10:51 PM Subject: Re: [pypy-dev] Anyone interested in a MIPS port... To: Armin Rigo Thanks Armin. Appreciate your email a lot. I had sort of lost track of this message. My final target is actually a 32 bit microcontroller from Microchip, the very famous PIC32 family. RAM is a big issue there... at least I don't know what is the exact way of expanding RAM on that architecture. MIPS though is available from many other companies, and has been the embedded workhorse of the last decade.
Will be happy to help someone who would take the lead in this effort. The python-on-a-chip project is more what microcontroller guys should look at. Take care, Vishal On Wed, Aug 24, 2011 at 6:44 PM, Armin Rigo wrote: > Hi Vishal, > > On Sat, Aug 20, 2011 at 3:53 PM, Vishal wrote: > > a) Does it make sense to have a MIPS port of the PyPy JIT. > > Yes, it definitely makes sense. I assume that the MIPS machines you > consider as final targets have *some* amount of RAM, like, say, > minimum 32MB or 64MB. PyPy would have issues running on smaller > machines, let alone with the JIT. > > > b) How much hardware dependent is a JIT port? > > You need to write pypy/jit/backend/mips/, similar to the other > existing JIT backends: x86 (the only one nightly tested), ARM or > PowerPC. This is the only hardware-dependent part (not e.g. the JIT > front-end): it receives a list of operations (generic operations > represented as nice objects, like "integer addition" and "read this > field from that pointer") and must turn it into machine code. > > Sven and David are currently working on the PowerPC backend in the > branch "ppc-jit-backend", if you want to follow; it is still at an > early stage, which means that the amount of code so far should be > reasonable. > > Sorry to answer this late, it seems that nobody is very much > interested in contributing... All I can promise myself is to give you > some help, as I do right now with Sven. :-) > > > A bientôt, > > Armin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Oct 17 09:28:20 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 17 Oct 2011 09:28:20 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: > One thing I have little doubt about is that pypy needs a "story" to > make wrapping of fortran/c/c++ libraries easy, because otherwise few > people in the scientific community will be interested.
For better or > worse, there are tens of millions of lines of code written in those > languages, and a lot of them domain specific (you will not write a > decent FFT code without knowing a lot about its implementation > details, same for large eigenvalue problems). There need to be some > automatic wrapper generators. Scipy alone easily wraps a thousand if > not more functions written in fortran. Yes, we're well aware of that and we don't plan to rewrite this existing codebase. As of now you can relatively easily call C/fortran from RPython and compile it together with PyPy. While PyPy does not have a C API, arrays are still "raw memory" which means you can pass pointers to the underlying C libraries. We don't have (yet?) automatic binding generation but this is for later. The main thing is that we want to provide something immediately useful. That is a numpy which maybe does not integrate (yet) with the entire ecosystem, but is much faster on both array computations and pure python iterations, ufuncs etc. This would be very useful since you don't need to use Cython or any other things like this to provide working code and it already caters for some group of people. Cheers, fijal From chef at ghum.de Mon Oct 17 09:31:54 2011 From: chef at ghum.de (Massa, Harald Armin) Date: Mon, 17 Oct 2011 09:31:54 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: > The main thing is that we want to provide something immediately > useful. That is a numpy which maybe does not integrate (yet) with the > entire ecosystem, but is much faster on both array computations and > pure python iterations, ufuncs etc. from a motivational perspective this is also quite important: having numpypy running at great speeds will draw the brain capital to get that wrapper stuff programmed. Harald -- GHUM GmbH Harald Armin Massa Spielberger Straße 49 70435 Stuttgart 0173/9409607 Amtsgericht Stuttgart, HRB 734971 - persuadere.
et programmare From fijall at gmail.com Mon Oct 17 09:34:42 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 17 Oct 2011 09:34:42 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 12:10 AM, Armin Rigo wrote: > Hi, > > On Sun, Oct 16, 2011 at 23:41, David Cournapeau wrote: >> Interesting to know. But then, wouldn't this limit the speed gains to >> be expected from the JIT ? > > Yes, to some extent. It cannot give you the last bit of performance > improvements you could expect from arithmetic optimizations, but (as > usual) you get already the several-times improvements of e.g. removing > the boxing and unboxing of float objects. Personally I'm wary of > going down that path, because it means that the results we get could > suddenly change their least significant digit(s) when the JIT kicks > in. At least there are multiple tests in the standard Python test > suite that would fail because of that. The thing is that as with python there are scenarios where we can optimize a lot (like you said by doing type specialization or folding array operations or using multithreading based on runtime decisions) where we don't have to squeeze the last 2% of performance. This is the approach that worked great for optimizing Python so far (concentrate on the larger picture). > >> And I am not sure I understand how you can "not go there" if you want >> to vectorize code to use SIMD instruction sets ? > > I'll leave fijal to answer this question in detail :-) I suppose that > the goal is first to use SIMD when explicitly requested in the RPython > source, in the numpy code that operates on matrices; and not do the > harder job of automatically unrolling and SIMD-ing loops containing > Python float operations. But even the latter could be done without > giving up on the idea that all Python operations should be present in > a bit-exact way (e.g.
by using SIMD on 64-bit floats, not on 32-bit > floats). For now we restrict SIMD operations to explicit array arithmetics and we don't do automatic vectorization. We'll see later what we do with it :) Cheers, fijal From fijall at gmail.com Mon Oct 17 09:36:03 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 17 Oct 2011 09:36:03 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 9:31 AM, Massa, Harald Armin wrote: >> The main thing is that we want to provide something immediately >> useful. That is a numpy which maybe does not integrate (yet) with the >> entire ecosystem, but is much faster on both array computations and >> pure python iterations, ufuncs etc. > > from a motivational perspective this is also quite important: having > numpypy running at great speads will draw the braincapital to get that > wrapperstuff programmed. > Yes, that's our secret plan (not any more) :) Cheers, fijal From stefan_ml at behnel.de Mon Oct 17 10:16:14 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 17 Oct 2011 10:16:14 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: Maciej Fijalkowski, 17.10.2011 09:34: > On Mon, Oct 17, 2011 at 12:10 AM, Armin Rigo wrote: >> On Sun, Oct 16, 2011 at 23:41, David Cournapeau wrote: >>> Interesting to know. But then, wouldn't this limit the speed gains to >>> be expected from the JIT ? >> >> Yes, to some extent. It cannot give you the last bit of performance >> improvements you could expect from arithmetic optimizations, but (as >> usual) you get already the several-times improvements of e.g. removing >> the boxing and unboxing of float objects. Personally I'm wary of >> going down that path, because it means that the results we get could >> suddenly change their least significant digit(s) when the JIT kicks >> in. 
At least there are multiple tests in the standard Python test >> suite that would fail because of that. > > The thing is that as with python there are scenarios where we can > optimize a lot (like you said by doing type specialization or folding > array operations or using multithreading based on runtime decisions) > where we don't have to squeeze the last 2% of performance. This is the > approach that worked great for optimizing Python so far (concentrate > on the larger picture). That's what I meant. It's not surprising that a JIT compiler can be faster than an interpreter, and it's not surprising that it can optimise generic code into several times faster specialised code. That's what JIT compilers are there for, and PyPy does a really good job at that. It's much harder to reach up to the performance of specialised, hand tuned code, though. And there is a lot of specialised, hand tuned code in SciPy and Sage, for example. That's a different kind of game than the "running generic Python code faster than CPython" business, however worthy that is by itself. Stefan From alex.pyattaev at gmail.com Mon Oct 17 10:17:01 2011 From: alex.pyattaev at gmail.com (Alex Pyattaev) Date: Mon, 17 Oct 2011 11:17:01 +0300 Subject: [pypy-dev] Success histories needed In-Reply-To: References: Message-ID: <1774912.BUcLc4JMg6@hunter-laptop.tontut.fi> I have a fully-functional wireless network simulation tool written in pypy+swig. Is that nice? Have a couple of papers to refer to as well. If you want I could write a small abstract on how it was so great to use pypy (which, in fact, was really great)? Alex. On Friday 07 October 2011 20:17:01 Maciej Fijalkowski wrote: > Hi > > We're gathering success stories for the website.
Anyone feel like > providing name, company and some data how pypy worked for them (we > accept info how it didn't work on bugs.pypy.org all the time :) > > Cheers, > fijal > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From fijall at gmail.com Mon Oct 17 10:30:13 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 17 Oct 2011 10:30:13 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 10:16 AM, Stefan Behnel wrote: > Maciej Fijalkowski, 17.10.2011 09:34: >> >> On Mon, Oct 17, 2011 at 12:10 AM, Armin Rigo wrote: >>> >>> On Sun, Oct 16, 2011 at 23:41, David Cournapeau wrote: >>>> >>>> Interesting to know. But then, wouldn't this limit the speed gains to >>>> be expected from the JIT ? >>> >>> Yes, to some extent. It cannot give you the last bit of performance >>> improvements you could expect from arithmetic optimizations, but (as >>> usual) you get already the several-times improvements of e.g. removing >>> the boxing and unboxing of float objects. Personally I'm wary of >>> going down that path, because it means that the results we get could >>> suddenly change their least significant digit(s) when the JIT kicks >>> in. At least there are multiple tests in the standard Python test >>> suite that would fail because of that. >> >> The thing is that as with python there are scenarios where we can >> optimize a lot (like you said by doing type specialization or folding >> array operations or using multithreading based on runtime decisions) >> where we don't have to squeeze the last 2% of performance. This is the >> approach that worked great for optimizing Python so far (concentrate >> on the larger picture). > > That's what I meant.
It's not surprising that a JIT compiler can be faster > than an interpreter, and it's not surprising that it can optimise generic > code into several times faster specialised code. That's what JIT compilers > are there for, and PyPy does a really good job at that. > > It's much harder to reach up to the performance of specialised, hand tuned > code, though. And there is a lot of specialised, hand tuned code in SciPy > and Sage, for example. That's a different kind of game than the "running > generic Python code faster than CPython" business, however worthy that is by > itself. > > Stefan We're not trying to compete though. The plan is to reuse specialized hand tuned code where it's there and compete in the areas where SciPy or NumPy doesn't cater well (like a python ufunc or a type specialization or a chain of array operations). Cheers, fijal From fijall at gmail.com Mon Oct 17 10:30:35 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 17 Oct 2011 10:30:35 +0200 Subject: [pypy-dev] Success histories needed In-Reply-To: <1774912.BUcLc4JMg6@hunter-laptop.tontut.fi> References: <1774912.BUcLc4JMg6@hunter-laptop.tontut.fi> Message-ID: On Mon, Oct 17, 2011 at 10:17 AM, Alex Pyattaev wrote: > I have a fully-functional wireless network simulation tool written in > pypy+swig. Is that nice? Have couple papers to refer to as well. If you want I > could write a small abstract on how it was so great to use pypy (which, in > fact, was really great)? Please :) From bokr at oz.net Mon Oct 17 12:12:16 2011 From: bokr at oz.net (Bengt Richter) Date: Mon, 17 Oct 2011 12:12:16 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: Message-ID: <4E9BFF80.4070006@oz.net> On 10/17/2011 12:10 AM Armin Rigo wrote: > Hi, > > On Sun, Oct 16, 2011 at 23:41, David Cournapeau wrote: >> Interesting to know. But then, wouldn't this limit the speed gains to >> be expected from the JIT ? > > Yes, to some extent. 
It cannot give you the last bit of performance > improvements you could expect from arithmetic optimizations, but (as > usual) you get already the several-times improvements of e.g. removing > the boxing and unboxing of float objects. Personally I'm wary of > going down that path, because it means that the results we get could > suddenly change their least significant digit(s) when the JIT kicks > in. At least there are multiple tests in the standard Python test > suite that would fail because of that. > >> And I am not sure I understand how you can "not go there" if you want >> to vectorize code to use SIMD instruction sets ? > > I'll leave fijal to answer this question in detail :-) I suppose that > the goal is first to use SIMD when explicitly requested in the RPython > source, in the numpy code that operate on matrices; and not do the > harder job of automatically unrolling and SIMD-ing loops containing > Python float operations. But even the latter could be done without > giving up on the idea that all Python operations should be present in > a bit-exact way (e.g. by using SIMD on 64-bit floats, not on 32-bit > floats). > > > A bientôt, > > Armin. I'm wondering how you handle high level loop optimizations vs floating point order-sensitive calculations. E.g., if a source loop has z[i]=a*b*c, might you hoist b*c without considering that assert a*b*c == a*(b*c) might fail, as in >>> a=b=1e-200; c=1e200 >>> assert a*b*c == a*(b*c) Traceback (most recent call last): File "<stdin>", line 1, in <module> AssertionError >>> a*b*c, a*(b*c) (0.0, 1e-200) Regards, Bengt Richter From stefan_ml at behnel.de Mon Oct 17 12:11:48 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 17 Oct 2011 12:11:48 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: David Cournapeau, 17.10.2011 00:01: > On Sun, Oct 16, 2011 at 10:20 PM, Ian Ozsvald wrote: >> how big is the scipy ecosystem beyond numpy?
What's the rough line >> count for Python, C, Fortran etc that depends on numpy? > > The ecosystem is pretty big. There are at least in the order of > hundred of packages that depend directly on numpy and scipy. > > For scipy alone, the raw count is around 150k-300k LOC (it is a bit > hard to estimate because we include some swig-generated code that I > have ignored here, and some code duplication to deal with distutils > insanity). There is around 80k LOC of fortran alone in there. > > More and more scientific code use cython for speed or just for > interfacing with C (and recently C++). Other tools have been used for > similar reasons (f2py, in particular, to automatically wrap fortran > and C). and fwrap nowadays, which also generates glue code for talking to Fortran from Cython code, through a thin C code wrapper (AFAIK). > f2py at least is quite tightly coupled to numpy C API. I know > there is work for a pypy-friendly backend for cython, but I don't know > where things are there. It's, erm, resting. The GSoC is over, the code hasn't been merged into mainline yet, lacks support for some recent Cython language features and is not in a state that would allow building anything major with it right away. It's based on ctypes, so it suffers from the same problems as ctypes, namely API/ABI inconsistencies beyond those that "ctypes_configure" can handle. In particular, things like talking to C macros will at least require additional C glue code to be generated, which doesn't currently happen. What works is the stripping of Cython specific syntax off the code and to map "regular" C code interactions to corresponding ctypes calls. So, some things work as it is, everything else needs more work. Helping hands and funding are welcome. That being said, I still think it's a promising approach, and it would be very interesting for PyPy to support Cython code (in one way or another). Cython certainly has a good standing in the Scientific Python community these days. 
If PyPy wants to enter as well, it will have to show that it can easily and efficiently interface with the huge amount of existing scientific code out there, be it C, C++, Fortran, Cython or whatever. And rewriting the code or even just the wrappers for Yet Another Python Implementation is not a scalable solution to that problem. > I would like to see less C boilerplate code in scipy, and more cython > usage (which generates faster code and is much more maintainable); this > can also benefit pypy, if only for making the scipy code less > dependent on CPython details. And by making the implementation essentially Python. That way, it can much more easily be ported to other Python platforms, especially PyPy, than if you have to start by reverse engineering even the exact wrapper signature from C code. Stefan From alex.gaynor at gmail.com Mon Oct 17 13:26:04 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 17 Oct 2011 07:26:04 -0400 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: <4E9BFF80.4070006@oz.net> References: <4E9BFF80.4070006@oz.net> Message-ID: On Mon, Oct 17, 2011 at 6:12 AM, Bengt Richter wrote: > On 10/17/2011 12:10 AM Armin Rigo wrote: > >> Hi, >> >> On Sun, Oct 16, 2011 at 23:41, David Cournapeau >> wrote: >> >>> Interesting to know. But then, wouldn't this limit the speed gains to >>> be expected from the JIT ? >>> >> >> Yes, to some extent. It cannot give you the last bit of performance >> improvements you could expect from arithmetic optimizations, but (as >> usual) you get already the several-times improvements of e.g. removing >> the boxing and unboxing of float objects. Personally I'm wary of >> going down that path, because it means that the results we get could >> suddenly change their least significant digit(s) when the JIT kicks >> in. At least there are multiple tests in the standard Python test >> suite that would fail because of that.
>> >> And I am not sure I understand how you can "not go there" if you want >>> to vectorize code to use SIMD instruction sets ? >>> >> >> I'll leave fijal to answer this question in detail :-) I suppose that >> the goal is first to use SIMD when explicitly requested in the RPython >> source, in the numpy code that operate on matrices; and not do the >> harder job of automatically unrolling and SIMD-ing loops containing >> Python float operations. But even the latter could be done without >> giving up on the idea that all Python operations should be present in >> a bit-exact way (e.g. by using SIMD on 64-bit floats, not on 32-bit >> floats). >> >> >> A bientôt, >> >> Armin. >> > I'm wondering how you handle high level loop optimizations vs > floating point order-sensitive calculations. E.g., if a source loop > has z[i]=a*b*c, might you hoist b*c without considering that > assert a*b*c == a*(b*c) might fail, as in > > >>> a=b=1e-200; c=1e200 > >>> assert a*b*c == a*(b*c) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > AssertionError > >>> a*b*c, a*(b*c) > (0.0, 1e-200) > > Regards, > Bengt Richter > > > > ______________________________**_________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/**mailman/listinfo/pypy-dev > No, you would never hoist b * c because b * c isn't an operation in that loop, the only ops that exist are: t1 = a * b t2 = t1 * c z[i] = t2 even if we did do arithmetic reassociation (which we don't, yet), you can't do them on floats. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ian at ianozsvald.com Mon Oct 17 14:18:56 2011 From: ian at ianozsvald.com (Ian Ozsvald) Date: Mon, 17 Oct 2011 13:18:56 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: > The ecosystem is pretty big. There are at least in the order of > hundred of packages that depend directly on numpy and scipy. > > For scipy alone, the raw count is around 150k-300k LOC (it is a bit > hard to estimate because we include some swig-generated code that I > have ignored here, and some code duplication to deal with distutils > insanity). There is around 80k LOC of fortran alone in there. Hi David, thanks for the numbers. Travis has posted a long discussion: http://technicaldiscovery.blogspot.com/2011/10/thoughts-on-porting-numpy-to-pypy.html and a few other points are raised at HackerNews: http://news.ycombinator.com/item?id=3118620 Whilst I understand Fijal's point about having a fast/lightweight demo of numpy I'm not sure what value this really brings to the project (I'll post this to Fijal's answer in a moment). If it isolates the rest of the numpy ecosystem (since it doesn't have a compatible C API) then only a fraction of people will be able to use it and it won't open a roadmap for increased library support, surely? As an example - I want numpy for client work. For my clients (the main being a physics company that is replacing Fortran with Python) numpy is at the heart of their simulations. However - numpy is used with matplotlib and pyCUDA and parts of scipy. If basic tools like FFT aren't available *and compatible* (i.e. not new implementations but running on tried, trusted and consistent C libs) then there'd be little reason to use pypy+numpy. pyCUDA could be a longer term goal but matplotlib would be essential. I note that many scientists won't switch to Python 3 due to lack of library support. numpy caught up with Py3 earlier in the year and matplotlib followed recently (so I guess SciPy itself will follow). 
Can we look at the details of the py3 porting process to get an idea of the complexity of the pypy-numpy + scipy project? Ian. From ian at ianozsvald.com Mon Oct 17 14:29:33 2011 From: ian at ianozsvald.com (Ian Ozsvald) Date: Mon, 17 Oct 2011 13:29:33 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: > This would be very useful since > you don't need to use Cython or any other things like this to provide > working code and it already caters for some group of people. Hi Fijal. This would be useful for a demo - but will it be useful for the userbase that becomes motivated to integrate Cython and SciPy? If it isn't useful to the wider community (which is the point I've made after David's email) then aren't we creating a (potentially) dead-end project rather than one that opens the doors to increased collaboration between the communities? Perhaps I should ask a wider question: If the pypy-numpy project only supports the core features of numpy and not the API (so excluding Cython/SciPy etc for now), what's the roadmap that lets people integrate SciPy's C/Fortran code in a maintainable way? I.e. how is the door opened to community members to introduce SciPy compatibility? Some idea of the complexity of the task would be very useful, preferably with input from people involved with CPython's numpy/scipy internals. i. -- Ian Ozsvald (A.I. 
researcher) ian at IanOzsvald.com http://IanOzsvald.com http://MorConsulting.com/ http://StrongSteam.com/ http://SocialTiesApp.com/ http://TheScreencastingHandbook.com http://FivePoundApp.com/ http://twitter.com/IanOzsvald From alex.gaynor at gmail.com Mon Oct 17 14:35:01 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 17 Oct 2011 08:35:01 -0400 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 8:29 AM, Ian Ozsvald wrote: > > This would be very useful since > > you don't need to use Cython or any other things like this to provide > > working code and it already caters for some group of people. > > Hi Fijal. This would be useful for a demo - but will it be useful for > the userbase that becomes motivated to integrate Cython and SciPy? > > If it isn't useful to the wider community (which is the point I've > made after David's email) then aren't we creating a (potentially) > dead-end project rather than one that opens the doors to increased > collaboration between the communities? > > Perhaps I should ask a wider question: If the pypy-numpy project only > supports the core features of numpy and not the API (so excluding > Cython/SciPy etc for now), what's the roadmap that lets people > integrate SciPy's C/Fortran code in a maintainable way? I.e. how is > the door opened to community members to introduce SciPy compatibility? > Some idea of the complexity of the task would be very useful, > preferably with input from people involved with CPython's numpy/scipy > internals. > > i. > > -- > Ian Ozsvald (A.I. 
researcher) > ian at IanOzsvald.com > > http://IanOzsvald.com > http://MorConsulting.com/ > http://StrongSteam.com/ > http://SocialTiesApp.com/ > http://TheScreencastingHandbook.com > http://FivePoundApp.com/ > http://twitter.com/IanOzsvald > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Let me ask the opposite question: What route do you envision that gives us both the speed we (and everyone else) desires, while not having any of these issues? That's not a question that has a very good answer I think. We could bring CPyExt up to a point where NumPy's C code runs, but that would be slow and a royal pain in the ass. What's the other alternative? We could port all of NumPy to be pure Python + wrapping code only for existing C/Fortran libraries. To be honest, that sounds very swell to me, would the core-numpy people go for that? Of course not, because it would be heinously slow on CPython, which I presume is unacceptable. So where does that leave us? Neither of the current platforms seems acceptable, what's the way forward where we're all in this together, because I'm having trouble seeing that (especially if, as Travis's post indicates backwards compatibility within NumPy means that none of the C APIs can be removed). Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fuzzyman at gmail.com Mon Oct 17 15:22:22 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Mon, 17 Oct 2011 14:22:22 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On 17 October 2011 13:35, Alex Gaynor wrote: > > > On Mon, Oct 17, 2011 at 8:29 AM, Ian Ozsvald wrote: > >> > This would be very useful since >> > you don't need to use Cython or any other things like this to provide >> > working code and it already caters for some group of people. >> >> Hi Fijal. This would be useful for a demo - but will it be useful for >> the userbase that becomes motivated to integrate Cython and SciPy? >> >> If it isn't useful to the wider community (which is the point I've >> made after David's email) then aren't we creating a (potentially) >> dead-end project rather than one that opens the doors to increased >> collaboration between the communities? >> >> Perhaps I should ask a wider question: If the pypy-numpy project only >> supports the core features of numpy and not the API (so excluding >> Cython/SciPy etc for now), what's the roadmap that lets people >> integrate SciPy's C/Fortran code in a maintainable way? I.e. how is >> the door opened to community members to introduce SciPy compatibility? >> Some idea of the complexity of the task would be very useful, >> preferably with input from people involved with CPython's numpy/scipy >> internals. >> >> i. >> >> -- >> Ian Ozsvald (A.I. 
researcher) >> ian at IanOzsvald.com >> >> http://IanOzsvald.com >> http://MorConsulting.com/ >> http://StrongSteam.com/ >> http://SocialTiesApp.com/ >> http://TheScreencastingHandbook.com >> http://FivePoundApp.com/ >> http://twitter.com/IanOzsvald >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> > > Let me ask the opposite question: What route do you envision that gives us > both the speed we (and everyone else) desires, while not having any of these > issues? That's not a question that has a very good answer I think. We > could bring CPyExt up to a point where NumPy's C code runs, but that would > be slow and a royal pain in the ass. What's the other alternative? We > could port all of NumPy to be pure Python + wrapping code only for existing > C/Fortran libraries. To be honest, that sounds very swell to me, would the > core-numpy people go for that? Of course not, because it would be heinously > slow on CPython, which I presume is unacceptable. So where does that leave > us? Neither of the current platforms seems acceptable, what's the way > forward where we're all in this together, because I'm having trouble seeing > that (especially if, as Travis's post indicates backwards compatibility > within NumPy means that none of the C APIs can be removed). Travis' post seems to suggest that it is the responsibility of the *pypy* dev team to do the work necessary to integrate the numpy refactor (initially sponsored by Microsoft). That refactoring (smaller numpy core) seems like a great way forward for numpy - particularly if *it* wants to play well with multiple implementations, but it is unreasonable to expect the pypy team to pick that up! For pypy I can't see any better approach than the way they have taken. 
Once people are using numpy on pypy the limitations and missing parts will become clear, and not only will the way forward be more obvious but there will be more people involved to do the work. It seems odd to argue that extending numpy to pypy will be a net negative for the community! Sure there are some difficulties involved, just as there are difficulties with having multiple implementations in the first place, but the benefits are much greater. All the best, Michael Foord > > Alex > > -- > "I disapprove of what you say, but I will defend to the death your right to > say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From yeomanyaacov at gmail.com Mon Oct 17 15:48:13 2011 From: yeomanyaacov at gmail.com (Yaacov Finkelman) Date: Mon, 17 Oct 2011 09:48:13 -0400 Subject: [pypy-dev] Pypy+cython Message-ID: How hard would it be to get Pypy to import uncompiled cython files? If we could teach Pypy to ignore cython's additions to the language then the jit will provide the performance boost. Jacob -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Mon Oct 17 16:44:57 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 17 Oct 2011 16:44:57 +0200 Subject: [pypy-dev] Pypy+cython In-Reply-To: References: Message-ID: Yaacov Finkelman, 17.10.2011 15:48: > How hard would it be to get Pypy to import uncompiled cython files? 
If we > could teach Pypy to ignore cython's additions to the language then the jit > will provide the performance boost. It's not as easy as dropping the syntax extensions, they actually serve a purpose. ;-) Cython has a rather large and nifty type system. Basically, it includes everything from Python, everything from C and a couple of major things from C++, plus some special types that result from language features, such as the PEP3118 buffer support. Mapping the language to PyPy directly would mean that PyPy would have to understand the type system as well, in order to know what the code is actually working on. The only way I can see to support Cython code on top of PyPy without actually reimplementing the language is by mapping it to a simpler abstraction that mimics the type system to a certain extent. That's where the Python+ctypes backend approach came from that was started in a GSoC this year. Stefan From ian at ianozsvald.com Mon Oct 17 17:30:16 2011 From: ian at ianozsvald.com (Ian Ozsvald) Date: Mon, 17 Oct 2011 16:30:16 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: > Let me ask the opposite question: What route do you envision that gives us > both the speed we (and everyone else) desires, while not having any of these > issues? That's not a question that has a very good answer I think. Hi Alex. I don't have a proposed route. I'm (sadly) too ignorant, I'm voicing the issues that came up in conversation. Seeing as I offered to put up money but the proposed route might not achieve my aims (having a good numpy foundation running with PyPy which opens the door to scipy support) I figure that I need to ask some questions - even if only to reduce my own ignorance. This isn't to say I want to avoid donating (*not at all*) - I just want to understand what looks like a wider set of issues than I'd originally understood from the short discussion at EuroPython during the PyPy demo and call for donations. 
See the reply to Michael for an extra detail. i. -- Ian Ozsvald (A.I. researcher) ian at IanOzsvald.com http://IanOzsvald.com http://MorConsulting.com/ http://StrongSteam.com/ http://SocialTiesApp.com/ http://TheScreencastingHandbook.com http://FivePoundApp.com/ http://twitter.com/IanOzsvald From ian at ianozsvald.com Mon Oct 17 17:42:21 2011 From: ian at ianozsvald.com (Ian Ozsvald) Date: Mon, 17 Oct 2011 16:42:21 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: > For pypy I can't see any better approach than the way they have taken. Once > people are using numpy on pypy the limitations and missing parts will become > clear, and not only will the way forward be more obvious but there will be > more people involved to do the work. Michael - I agree that the PyPy community shouldn't do all the legwork! I agree also that the proposed path may spur more work (and maybe that's the best goal for now). I've gone back to the donations page: http://pypy.org/numpydonate.html to re-read the spec. What I get now (but didn't get before the discussion at Enthought Cambridge) is that "we don't plan to implement NumPy's C API" is a big deal (and not taking it on is entirely reasonable for this project!). In my mind (and maybe in the mind of some others who use scipy?) a base pypy+numpy project would easily open the door to matplotlib and all the other scipy goodies, it looks now like that isn't the case. Hence my questions to try to understand what else might be involved. i. > Travis' post seems to suggest that it is the responsibility of the *pypy* > dev team to do the work necessary to integrate the numpy refactor (initially > sponsored by Microsoft). That refactoring (smaller numpy core) seems like a > great way forward for numpy - particularly if *it* wants to play well with > multiple implementations, but it is unreasonable to expect the pypy team to > pick that up! 
> > > It seems odd to argue that extending numpy to pypy will be a net negative > for the community! Sure there are some difficulties involved, just as there > are difficulties with having multiple implementations in the first place, > but the benefits are much greater. > > All the best, > > Michael Foord > >> >> Alex >> >> -- >> "I disapprove of what you say, but I will defend to the death your right >> to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) >> "The people's good is the highest law." -- Cicero >> >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> > > > > -- > > http://www.voidspace.org.uk/ > > May you do good and not evil > May you find forgiveness for yourself and forgive others > > May you share freely, never taking more than you give. > -- the sqlite blessing http://www.sqlite.org/different.html > -- Ian Ozsvald (A.I. researcher) ian at IanOzsvald.com http://IanOzsvald.com http://MorConsulting.com/ http://StrongSteam.com/ http://SocialTiesApp.com/ http://TheScreencastingHandbook.com http://FivePoundApp.com/ http://twitter.com/IanOzsvald From fijall at gmail.com Mon Oct 17 17:46:27 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 17 Oct 2011 17:46:27 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 5:30 PM, Ian Ozsvald wrote: >> Let me ask the opposite question: What route do you envision that gives us >> both the speed we (and everyone else) desires, while not having any of these >> issues? That's not a question that has a very good answer I think. > > Hi Alex. I don't have a proposed route. I'm (sadly) too ignorant, I'm > voicing the issues that came up in conversation.
Seeing as I offered > to put up money but the proposed route might not achieve my aims > (having a good numpy foundation running with PyPy which opens the door > to scipy support) I figure that I need to ask some questions - even if > only to reduce my own ignorance. > > This isn't to say I want to avoid donating (*not at all*) - I just > want to understand what looks like a wider set of issues than I'd > originally understood from the short discussion at EuroPython during > the PyPy demo and call for donations. > > See the reply to Michael for an extra detail. > > i. > The call for donations precisely mentions the fact that scipy, matplotlib and a billion other libraries written in C/Cython won't work. It also precisely mentions that the C API of numpy won't be there. However, it'll still export raw pointers to arrays so you can call existing C/fortran code like blas. I admit this proposal does not cater for Travis's use case - have what he has now just faster. That was not the point. The point is we want to take numpy as a pretty good API and implement it better to cater for people who want to use it and have it nicely integrate with Python and be fast. FFT libraries won't work out of the box, but they should be relatively simple to get to run, without reimplementing algorithms. PyPy is not really trying to solve all the problems of the world - it's still work to adjust current code (like scipy) to work with different APIs and we won't cater for all of this, at least not immediately. I seriously don't buy it that it's a net loss for the numpy community - having numpy running faster and nicely integrated with Python is a win for a lot of people already and that's good enough to try and see where it leads us. I'll reiterate because it seems this is misinterpreted again and again - pypy's numpy *will* integrate in some sort of way with existing C/fortran libraries, but this way *will* be different than current CPython C API. It's really just too hard to get both.
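The "export raw pointers to arrays" integration described above can be sketched with today's tools: an array type only has to expose a pointer to its contiguous data, plus a length and element size, for existing C or Fortran code to operate on it. In the hypothetical example below, the stdlib array module stands in for a numpy array and libc's qsort stands in for an external routine such as a BLAS call; nothing here is PyPy's actual API, it only illustrates the shape of the boundary.

```python
# Hedged sketch (not PyPy's API): hand a raw data pointer from a
# Python-owned array to existing C code via ctypes.
import array
import ctypes

data = array.array('d', [3.0, 1.0, 2.0])

# A ctypes view over the array's underlying buffer -- this is the
# "raw pointer" that a C library needs.
buf = (ctypes.c_double * len(data)).from_buffer(data)

libc = ctypes.CDLL(None)  # POSIX: resolve symbols from the loaded libc

# qsort(void *base, size_t nmemb, size_t size,
#       int (*cmp)(const void *, const void *))
CMP = ctypes.CFUNCTYPE(ctypes.c_int,
                       ctypes.POINTER(ctypes.c_double),
                       ctypes.POINTER(ctypes.c_double))
libc.qsort.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                       ctypes.c_size_t, CMP]
libc.qsort.restype = None

def compare(a, b):
    # Standard C comparator contract: negative / zero / positive.
    return (a[0] > b[0]) - (a[0] < b[0])

# The C routine sorts the Python-owned buffer in place.
libc.qsort(ctypes.cast(buf, ctypes.c_void_p), len(data),
           ctypes.sizeof(ctypes.c_double), CMP(compare))

print(list(data))  # [1.0, 2.0, 3.0]
```

Only the pointer, element count, and element size cross the language boundary here, which is why such an interface can survive even when the Python-side object layout is completely different from CPython's.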
Cheers, fijal From stefan_ml at behnel.de Mon Oct 17 18:01:03 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 17 Oct 2011 18:01:03 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: Maciej Fijalkowski, 17.10.2011 17:46: > - pypy's numpy *will* integrate in some sort of way with existing > C/fortran libraries, but this way *will* be different than current > CPython C API. It's really just too hard to get both. Why reinvent yet another wheel when you could make Cython a common language to write extensions and wrapper code for both? Even if that requires a few feature restrictions for Cython users or adaptations to their code to keep it portable, it's still better than forcing users into a complete vendor lock-in on both sides. Stefan From fuzzyman at gmail.com Mon Oct 17 18:09:43 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Mon, 17 Oct 2011 17:09:43 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On 17 October 2011 16:42, Ian Ozsvald wrote: > > For pypy I can't see any better approach than the way they have taken. > Once > > people are using numpy on pypy the limitations and missing parts will > become > > clear, and not only will the way forward be more obvious but there will > be > > more people involved to do the work. > > Michael - I agree that the PyPy community shouldn't do all the > legwork! I agree also that the proposed path may spur more work (and > maybe that's the best goal for now). > > I've gone back to the donations page: > http://pypy.org/numpydonate.html > to re-read the spec. What I get now (but didn't get before the > discussion at Enthought Cambridge) is that "we don't plan to implement > NumPy's C API" is a big deal (and not taking it on is entirely > reasonable for this project!). > > In my mind (and maybe in the mind of some others who use scipy?) 
a > base pypy+numpy project would easily open the door to matplotlib and > all the other scipy goodies, it looks now like that isn't the case. > Hence my questions to try to understand what else might be involved. > > Well, I think it definitely "opens the door" - certainly a lot more than not doing the work! You have to start somewhere. It seems like other projects (like the pypy cython backend) will help make other parts of project easier down the line. Back to Alex's question, how else would you *suggest* starting? Isn't a core port of the central parts the obvious way to begin? Given the architecture of numpy it does seem that it opens up a whole bunch of questions around numpy on multiple implementations. Certainly pypy should be involved in the discussion here, but I don't think it is up to pypy to find (or implement) the answers... All the best, Michael > i. > > > Travis' post seems to suggest that it is the responsibility of the *pypy* > > dev team to do the work necessary to integrate the numpy refactor > (initially > > sponsored by Microsoft). That refactoring (smaller numpy core) seems like > a > > great way forward for numpy - particularly if *it* wants to play well > with > > multiple implementations, but it is unreasonable to expect the pypy team > to > > pick that up! > > > > > > It seems odd to argue that extending numpy to pypy will be a net negative > > for the community! Sure there are some difficulties involved, just as > there > > are difficulties with having multiple implementations in the first place, > > but the benefits are much greater. > > > > All the best, > > > > Michael Foord > > > >> > >> Alex > >> > >> -- > >> "I disapprove of what you say, but I will defend to the death your right > >> to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > >> "The people's good is the highest law." 
-- Cicero > >> > >> > >> _______________________________________________ > >> pypy-dev mailing list > >> pypy-dev at python.org > >> http://mail.python.org/mailman/listinfo/pypy-dev > >> > > > > > > > > -- > > > > http://www.voidspace.org.uk/ > > > > May you do good and not evil > > May you find forgiveness for yourself and forgive others > > > > May you share freely, never taking more than you give. > > -- the sqlite blessing http://www.sqlite.org/different.html > > > > > > -- > Ian Ozsvald (A.I. researcher) > ian at IanOzsvald.com > > http://IanOzsvald.com > http://MorConsulting.com/ > http://StrongSteam.com/ > http://SocialTiesApp.com/ > http://TheScreencastingHandbook.com > http://FivePoundApp.com/ > http://twitter.com/IanOzsvald > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Oct 17 18:13:12 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 17 Oct 2011 18:13:12 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 6:01 PM, Stefan Behnel wrote: > Maciej Fijalkowski, 17.10.2011 17:46: >> >> - pypy's numpy *will* integrate in some sort of way with existing >> C/fortran libraries, but this way *will* be different than current >> CPython C API. It's really just too hard to get both. > > Why reinvent yet another wheel when you could make Cython a common language > to write extensions and wrapper code for both? Even if that requires a few > feature restrictions for Cython users or adaptations to their code to keep > it portable, it's still better than forcing users into a complete vendor > lock-in on both sides. Yeah, agreed. 
We don't have a C API at all and it's unlikely we'll implement something yet-completely-different. Cython is definitely very high on the list of things to consider for "a reasonable FFI". Cheers, fijal From alex.gaynor at gmail.com Mon Oct 17 18:14:46 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 17 Oct 2011 12:14:46 -0400 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 12:01 PM, Stefan Behnel wrote: > Maciej Fijalkowski, 17.10.2011 17:46: > > - pypy's numpy *will* integrate in some sort of way with existing >> C/fortran libraries, but this way *will* be different than current >> CPython C API. It's really just too hard to get both. >> > > Why reinvent yet another wheel when you could make Cython a common language > to write extensions and wrapper code for both? Even if that requires a few > feature restrictions for Cython users or adaptations to their code to keep > it portable, it's still better than forcing users into a complete vendor > lock-in on both sides. > > Stefan > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > There's no fundamental objection to Cython, but there are practical ones. a) Most of NumPy isn't Cython, so just having Cython gives us little. b) Is the NumPy on Cython house in order? AFAIK part of the MS project involved rewriting parts of NumPy in Cython and modularising Cython for targets besides CPython. And that this was *not* merged. For me to be convinced Cython is a good target, I'd need belief that there's an interest in it being a common platform, and when I see that there's work done, by core developers, which sits unmerged (with no timeline) I can't have faith in that. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it."
-- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gaynor at gmail.com Mon Oct 17 19:22:52 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 17 Oct 2011 13:22:52 -0400 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 1:20 PM, David Cournapeau wrote: > On Mon, Oct 17, 2011 at 2:22 PM, Michael Foord wrote: > > > > > Travis' post seems to suggest that it is the responsibility of the *pypy* > > dev team to do the work necessary to integrate the numpy refactor > (initially > > sponsored by Microsoft). That refactoring (smaller numpy core) seems like > a > > great way forward for numpy - particularly if *it* wants to play well > with > > multiple implementations, but it is unreasonable to expect the pypy team > to > > pick that up! > > I am pretty sure Travis did not intend to suggest that (I did not > understand that from his wordings, but maybe that's because we had > discussion in person on that topic several times already). > > There are a lot of reasons to do that refactor that has nothing to do > with pypy, so the idea is more: let's talk about what pypy would need > to make this refactor beneficial for pypy *as well*. I (and other) > have advocated using more cython inside numpy and scipy. We could > share resources to do that. > > > It seems odd to argue that extending numpy to pypy will be a net negative > > for the community! Sure there are some difficulties involved, just as > there > > are difficulties with having multiple implementations in the first place, > > but the benefits are much greater. > > The net negative would be the community split, with numpy losing some > resources taken by numpy on pypy. This seems like a plausible > situation. > > Without a C numpy API, you can't have scipy or matplotlib, no > scikit-learns, etc... 
But you could hide most of it behind cython, > which has momentum in the scientific community. Then a realistic > approach becomes: > - makes the cython+pypy backend a reality > - ideally make cython to wrap fortran a reality > - convert as much as possible from python C API to cython > > People of all level can participate. The first point in particular > could help pypy besides the scipy community. And that's a plan where > both parties would benefit from each other. > > cheers, > > David > > > > All the best, > > > > Michael Foord > > > >> > >> Alex > >> > >> -- > >> "I disapprove of what you say, but I will defend to the death your right > >> to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > >> "The people's good is the highest law." -- Cicero > >> > >> > >> _______________________________________________ > >> pypy-dev mailing list > >> pypy-dev at python.org > >> http://mail.python.org/mailman/listinfo/pypy-dev > >> > > > > > > > > -- > > > > http://www.voidspace.org.uk/ > > > > May you do good and not evil > > May you find forgiveness for yourself and forgive others > > > > May you share freely, never taking more than you give. > > -- the sqlite blessing http://www.sqlite.org/different.html > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > > > Why can't you have scipy and friends without a C-API? Presumably it's all code that either manipulates an array or calls into an existing lib to manipulate an array. Why can't you write pure python code to manipulate arrays and then call into other libs via ctypes and friends? Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cournape at gmail.com Mon Oct 17 19:20:09 2011 From: cournape at gmail.com (David Cournapeau) Date: Mon, 17 Oct 2011 18:20:09 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 2:22 PM, Michael Foord wrote: > > Travis' post seems to suggest that it is the responsibility of the *pypy* > dev team to do the work necessary to integrate the numpy refactor (initially > sponsored by Microsoft). That refactoring (smaller numpy core) seems like a > great way forward for numpy - particularly if *it* wants to play well with > multiple implementations, but it is unreasonable to expect the pypy team to > pick that up! I am pretty sure Travis did not intend to suggest that (I did not understand that from his wordings, but maybe that's because we had discussion in person on that topic several times already). There are a lot of reasons to do that refactor that has nothing to do with pypy, so the idea is more: let's talk about what pypy would need to make this refactor beneficial for pypy *as well*. I (and other) have advocated using more cython inside numpy and scipy. We could share resources to do that. > It seems odd to argue that extending numpy to pypy will be a net negative > for the community! Sure there are some difficulties involved, just as there > are difficulties with having multiple implementations in the first place, > but the benefits are much greater. The net negative would be the community split, with numpy losing some resources taken by numpy on pypy. This seems like a plausible situation. Without a C numpy API, you can't have scipy or matplotlib, no scikit-learns, etc... But you could hide most of it behind cython, which has momentum in the scientific community. Then a realistic approach becomes: - makes the cython+pypy backend a reality - ideally make cython to wrap fortran a reality - convert as much as possible from python C API to cython People of all level can participate. 
The first point in particular could help pypy besides the scipy community. And that's a plan where both parties would benefit from each other. cheers, David > > All the best, > > Michael Foord > >> >> Alex >> >> -- >> "I disapprove of what you say, but I will defend to the death your right >> to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) >> "The people's good is the highest law." -- Cicero >> >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> > > > > -- > > http://www.voidspace.org.uk/ > > May you do good and not evil > May you find forgiveness for yourself and forgive others > > May you share freely, never taking more than you give. > -- the sqlite blessing http://www.sqlite.org/different.html > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > From cournape at gmail.com Mon Oct 17 19:47:14 2011 From: cournape at gmail.com (David Cournapeau) Date: Mon, 17 Oct 2011 18:47:14 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 6:22 PM, Alex Gaynor wrote: > > Why can't you have scipy and friends without a C-API? Presumably it's all > code that either manipulates an array or calls into an existing lib to > manipulate an array. Why can't you write pure python code to manipulate > arrays and then call into other libs via ctypes and friends? Sorry, I was not very clear: with scipy *as of today*, you can't make it work without supporting the numpy C API. What I meant by hiding is that once code uses the numpy C API and the python C API only through cython, it becomes much easier to support both CPython and pypy at least in principle (and the code is more maintainable). But scipy is basically pure python code + lots of pure fortran/C code + wrappers around it.
Having something that automatically wraps C/fortran for pypy is something that seems reasonable for pypy people to do and would narrow the gap. cheers, David From lac at openend.se Mon Oct 17 19:57:17 2011 From: lac at openend.se (Laura Creighton) Date: Mon, 17 Oct 2011 19:57:17 +0200 Subject: [pypy-dev] PyPy Sprint Nov 2 - 9 in Gothenburg, Sweden Message-ID: <201110171757.p9HHvHgg018306@theraft.openend.se> PyPy Göteborg Post-Hallowe'en Sprint Nov 2nd - Nov 9th ========================================================= The next PyPy sprint will be in Gothenburg, Sweden. It is a public sprint, suitable for newcomers. We'll focus on making a public kickoff for both the `numpy/pypy integration project`_ and the `Py3k support project`_, as well as whatever interests the Sprint attendees. Since both of these projects are very new, there will be plenty of work suitable for newcomers to PyPy. .. _`numpy/pypy integration project`: http://pypy.org/numpydonate.html .. _`Py3k support project`: http://pypy.org/py3donate.html Other topics might include: - Helping people get their code running with PyPy - work on a FSCons talk? - state of the STM Vinnova project (We most likely, but not for certain, will know whether or not we are approved by this date.) Other Useful dates ------------------ GothPyCon_ - Saturday Oct 29. .. _GothPyCon: http://www.meetup.com/GothPy/events/32864862/ FSCONS_ Friday Nov 11 - Sunday Nov 12. .. _FSCONS: http://fscons.org/ Location -------- The sprint will be held in the apartment of Laura Creighton and Jacob Hallén which is at Götabergsgatan 22 in Gothenburg, Sweden. Here is a map_. This is in central Gothenburg. It is between the tram_ stops of Vasaplatsen and Valand, (a distance of 4 blocks) where many lines call -- the 2, 3, 4, 5, 7, 10 and 13. .. _tram: http://www.vasttrafik.se/en/ .. _map: http://bit.ly/grRuQe Probably cheapest and not too far away is to book accommodation at `SGS Veckobostader`_.
The `Elite Park Avenyn Hotel`_ is a luxury hotel just a few blocks away. There are scores of hotels a short walk away from the sprint location, suitable for every budget, desire for luxury, and desire for the unusual. You could, for instance, stay on a `boat`_. Options are too numerous to go into here. Just ask in the mailing list or on the blog. .. _`SGS Veckobostader`: http://www.sgsveckobostader.se/en .. _`Elite Park Avenyn Hotel`: http://www.elite.se/hotell/goteborg/park/ .. _`boat`: http://www.liseberg.se/en/home/Accommodation/Hotel/Hotel-Barken-Viking/ Hours will be from 10:00 until people have had enough. It's a good idea to arrive a day before the sprint starts and leave a day later. In the middle of the sprint there usually is a break day and it's usually ok to take half-days off if you feel like it. Of course, many of you may be interested in sticking around for FSCons, held the weekend after the sprint. Good to Know ------------ Sweden is not part of the Euro zone. One SEK (krona in singular, kronor in plural) is roughly 1/10th of a Euro (9.36 SEK to 1 Euro). The venue is central in Gothenburg. There is a large selection of places to get food nearby, from edible-and-cheap to outstanding. We often cook meals together, so let us know if you have any food allergies, dislikes, or special requirements. Sweden uses the same kind of plugs as Germany. 230V AC. Getting Here ------------ If you are coming by train, you will arrive at the `Central Station`_. It is about 12 blocks to the site from there, or you can take a tram_. There are two airports which are local to Göteborg, `Landvetter`_ (the main one) and `Gothenburg City Airport`_ (where some budget airlines fly). If you arrive at `Landvetter`_, the airport bus stops right downtown at `Elite Park Avenyn Hotel`_ which is the second stop, 4 blocks from the Sprint site, as well as the end of the line, which is the `Central Station`_. If you arrive at `Gothenburg City Airport`_, take the bus to the end of the line.
You will be at the `Central Station`_. You can also arrive by ferry_, from either Kiel in Germany or Frederikshavn in Denmark. .. _`Central Station`: http://bit.ly/fON43p .. _`Landvetter`: http://swedavia.se/en/Goteborg/Traveller-information/Traffic-information/ .. _`Gothenburg City Airport`: http://www.goteborgairport.se/eng.asp .. _ferry: http://www.stenaline.nl/en/ferry/ Who's Coming? -------------- If you'd like to come, please let us know when you will be arriving and leaving, as well as letting us know your interests. We'll keep a list of `people`_ which we'll update (which you can do yourself if you have bitbucket pypy commit rights). .. _`people`: https://bitbucket.org/pypy/extradoc/src/tip/sprintinfo/gothenburg-2011/people.txt From fijall at gmail.com Mon Oct 17 21:40:14 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 17 Oct 2011 21:40:14 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 7:20 PM, David Cournapeau wrote: > On Mon, Oct 17, 2011 at 2:22 PM, Michael Foord wrote: > >> >> Travis' post seems to suggest that it is the responsibility of the *pypy* >> dev team to do the work necessary to integrate the numpy refactor (initially >> sponsored by Microsoft). That refactoring (smaller numpy core) seems like a >> great way forward for numpy - particularly if *it* wants to play well with >> multiple implementations, but it is unreasonable to expect the pypy team to >> pick that up! > > I am pretty sure Travis did not intend to suggest that (I did not > understand that from his wordings, but maybe that's because we had > discussion in person on that topic several times already). > > There are a lot of reasons to do that refactor that has nothing to do > with pypy, so the idea is more: let's talk about what pypy would need > to make this refactor beneficial for pypy *as well*. I (and other) > have advocated using more cython inside numpy and scipy.
We could > share resources to do that. I think Alex's question was whether the refactoring is going to be merged upstream or not (and what's the plan). I don't think you understand our point. Reusing the current numpy implementation is not giving us much *even* if it was all Cython and no C API. It's just that we can do cool stuff with the JIT. *Right now* an operation chain like this:

  a, b, c = [numpy.arange(100) for i in range(3)]
  a + b - c

becomes

  i = 0
  while i < 100:
      res[i] = a[i] + b[i] - c[i]
      i += 1

without allocating intermediates. In the near future we plan to implement this using SSE so it becomes even faster. It also applies to all kinds of operations that we implemented in RPython - ufuncs, castings etc. All of them get unrolled into a single loop right now; they can get nicely vectorized in the near future. Having numpy still implementing stuff in C doesn't buy us much - we wouldn't be able to do all the cool stuff we're doing now and we won't get all the speedups. That's why we don't reuse the current numpy, and not because it uses the C API. Now the scenario is slightly different with FFT and other more complex algorithms. We want to call existing C code with array pointers so we don't have to reimplement it. Now tell me - how does moving pieces of scipy or numpy to cython give us anything? > >> It seems odd to argue that extending numpy to pypy will be a net negative >> for the community! Sure there are some difficulties involved, just as there >> are difficulties with having multiple implementations in the first place, >> but the benefits are much greater. > > The net negative would be the community split, with numpy losing some > resources taken by numpy on pypy. This seems like a plausible > situation. So, you're saying that giving people the ability to run numpy code faster, if they refrain from using scipy and matplotlib (for now), is producing the community split? How does it?
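(The fusion described in this message can be rendered in plain Python as a rough illustration. This is a hypothetical sketch only: in PyPy the single fused loop is produced by the JIT from the traced operations, not written by hand.)

```python
# Naive evaluation of a + b - c: each binary op allocates a full intermediate.
def add_sub_naive(a, b, c):
    tmp = [x + y for x, y in zip(a, b)]     # intermediate array for a + b
    return [t - z for t, z in zip(tmp, c)]  # second pass over the data

# Fused evaluation, the shape of the loop the JIT emits:
# one pass over the data, no temporaries.
def add_sub_fused(a, b, c):
    res = [0] * len(a)
    i = 0
    while i < len(a):
        res[i] = a[i] + b[i] - c[i]
        i += 1
    return res

a, b, c = [list(range(100)) for _ in range(3)]
print(add_sub_fused(a, b, c) == add_sub_naive(a, b, c))  # True
```

Both versions compute the same result; the fused one touches each element once and allocates nothing per operation, which is also what makes later SSE vectorization of the single loop straightforward.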
My interpretation is that we want to give people powerful tools that can be used to achieve things not possible before - like not using cython but instead implementing it in python. I imagine how someone might not get value from that, but how does that decrease the value? > > Without a C numpy API, you can't have scipy or matplotlib, no > scikit-learns, etc... But you could hide most of it behind cython, > which has momentum in the scientific community. Then a realistic > approach becomes: > - makes the cython+pypy backend a reality > - ideally make cython to wrap fortran a reality > - convert as much as possible from python C API to cython > > People of all level can participate. The first point in particular > could help pypy besides the scipy community. And that's a plan where > both parties would benefit from each other. I think our priority right now is to provide a working numpy. Next point is to make it use SSE. Does that fit somehow with your plan? Cheers, fijal From arigo at tunes.org Mon Oct 17 21:42:02 2011 From: arigo at tunes.org (Armin Rigo) Date: Mon, 17 Oct 2011 21:42:02 +0200 Subject: [pypy-dev] PyPy Sprint Nov 2 - 9 in Gothenburg, Sweden In-Reply-To: <201110171757.p9HHvHgg018306@theraft.openend.se> References: <201110171757.p9HHvHgg018306@theraft.openend.se> Message-ID: Hi, On Mon, Oct 17, 2011 at 19:57, Laura Creighton wrote: > .. _`people`: https://bitbucket.org/pypy/extradoc/src/tip/sprintinfo/gothenburg-2011/people.txt This is the file containing the previous sprint's attendance. I suggest we use instead a file in the correct directory, which is: https://bitbucket.org/pypy/extradoc/src/tip/sprintinfo/gothenburg-2011-2/people.txt A bientôt, Armin.
From stefan_ml at behnel.de Mon Oct 17 23:14:08 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 17 Oct 2011 23:14:08 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: Alex Gaynor, 17.10.2011 18:14: > On Mon, Oct 17, 2011 at 12:01 PM, Stefan Behnel wrote: >> Maciej Fijalkowski, 17.10.2011 17:46: >> - pypy's numpy *will* integrate in some sort of way with existing >>> C/fortran libraries, but this way *will* be different than current >>> CPython C API. It's really just too hard to get both. >> >> Why reinvent yet another wheel when you could make Cython a common language >> to write extensions and wrapper code for both? Even if that requires a few >> feature restrictions for Cython users or adaptations to their code to keep >> it portable, it's still better than forcing users into a complete vendor >> lock-in on both sides. > > There's no fundamental objection to Cython, but there are practical ones. I'm very well aware of that. There are both technical and practical issues. I didn't hide the fact that the Python+ctypes backend for Cython is quite far from being ready for use, for example. > a) Most of NumPy isn't Cython, so just having Cython gives us little. There has been the move towards a smaller core for NumPy, and we perceive substantial interest, both inside and outside of the Scientific Python community, in writing new wrapper code in Cython and even in rewriting existing code in Cython to make it more maintainable. Even generated wrappers were and are being rewritten, e.g. to get rid of SWIG. Rewriting several hundred to thousand lines of C code in Cython can often be done within a few days, depending on test coverage and code complexity, and from what we hear, this is actually being done or at least seriously considered in several projects. It's helped by the fact that CPython users do not have to make the switch right away, but can often migrate or add a module at a time. 
I agree that simply supporting Cython is not going to magically connect huge amounts of foreign code to PyPy. It just makes it a lot easier to get closer to that goal than by inventing yet another way of interfacing that is not supported by anything else. Also note that there isn't just NumPy. A relatively large part of Sage is written in Cython, for example, especially those parts that glue the rest together, which consists of huge amounts of C, C++ and Fortran code. After all, Cython's predecessor Pyrex has been around for almost ten years now. > b) > Is the NumPy on Cython house in order? AFAIK part of the MS project > involved rewriting parts of NumPy in Cython and modularising Cython for > targets besides CPython. And that this was *not* merged. For me to be > convinced Cython is a good target, I'd need belief that there's an interest > in it being a common platform, and when I see that there's work done, by > core developers, which sits unmerged (with no timeline) I can't have faith > in that. I understand that objection. The Cython project is largely driven by the interest of core developers and users (now, how unexpected is that?), and none of the developers currently uses IronPython or PyPy. So, while we'd like to see Cython support other targets (and the core developers agree on that goal), there isn't really a strong incentive for ourselves to move it into that direction. It's a bit of a chicken and egg problem - why support other platforms that no-one uses it for, and who'd use it on a platform that's not as well supported as CPython? I'd personally like to get the ctypes backend merged, but it's not exactly in a state that is ready-to-merge soonish. There's a branch, and Romain (our GSoC student for the project) is still working on it, but obviously with much less time for it, so I'm sure he could use another helping hand. https://github.com/hardshooter/CythonCTypesBackend The IronPython port is a different beast. 
It ran almost completely in cloak mode, outside of the scope of the core developers, and it's neither clear what the exact design goals were, nor what was eventually achieved or in what status the code branch currently is. The project itself died from sudden lack of interest on the side of the financial supporters (MS) at some point, and it appears that there is currently no-one who can easily take it over. Sad, but really nothing to blame the Cython developers for. I'd be happy to see it revived, if there is any interest. https://bitbucket.org/cwitty/cython-for-ironpython/overview Stefan From cournape at gmail.com Mon Oct 17 22:18:32 2011 From: cournape at gmail.com (David Cournapeau) Date: Mon, 17 Oct 2011 21:18:32 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 8:40 PM, Maciej Fijalkowski wrote: > On Mon, Oct 17, 2011 at 7:20 PM, David Cournapeau wrote: >> On Mon, Oct 17, 2011 at 2:22 PM, Michael Foord wrote: >> >>> >>> Travis' post seems to suggest that it is the responsibility of the *pypy* >>> dev team to do the work necessary to integrate the numpy refactor (initially >>> sponsored by Microsoft). That refactoring (smaller numpy core) seems like a >>> great way forward for numpy - particularly if *it* wants to play well with >>> multiple implementations, but it is unreasonable to expect the pypy team to >>> pick that up! >> >> I am pretty sure Travis did not intend to suggest that (I did not >> understand that from his wordings, but maybe that's because we had >> discussion in person on that topic several times already). >> >> There are a lot of reasons to do that refactor that has nothing to do >> with pypy, so the idea is more: let's talk about what pypy would need >> to make this refactor beneficial for pypy *as well*. I (and other) >> have advocated using more cython inside numpy and scipy. We could >> share resources to do that. 
> > I think Alex's question was whether the refactoring is going to be > merged upstream or not (and what's the plan). I don't know if the refactoring will be merged as is, but at least I think the refactoring needs to happen, independently of pypy. There is no denying that parts of numpy's code are crufty, some stuff not clearly separated, etc... > I don't think you understand our point. I really do. I understand that pypy is a much better platform than cpython to do lazy evaluation, fast pure python ufuncs. Nobody denies that. To be even clearer: if the goal is to have some concept of array which looks like numpy, then yes, using numpy's code is useless. > Reusing the current numpy > implementation is not giving us much *even* if it was all Cython and > no C API. This seems to be the source of the disagreement: I think reusing numpy means that you are much more likely to be able to run the existing scripts using numpy on top of pypy. So my question is whether the disagreement is on the value of that, or whether the pypy community generally thinks they can rewrite a "numpypy" which is a drop-in replacement of numpy on cpython without using original numpy's code. > So, you're saying that giving people the ability to run numpy code > faster if they refrain from using scipy and matplotlib (for now) is > producing the community split? How does it? My interpretation is that > we want to give people powerful tools that can be used to achieve > things not possible before - like not using cython but instead > implementing it in python. I imagine how someone might not get value > from that, but how does that decrease the value? It is not my place to question anyone's values, we all have our different usages. But the split is obvious: you may have scientific code which works on numpy+pypy and does not on numpy+python, and vice versa. > I think our priority right now is to provide a working numpy. Next > point is to make it use SSE. Does that fit somehow with your plan?
I guess there is an ambiguity in the exact meaning of "working numpy". Something that looks like numpy with cool features from pypy, or something that can be used as a drop-in of numpy (any script using numpy will work with the numpy+pypy). If it is the former, then again, I would agree that there is not much point in reusing numpy's code. But then, I think calling it numpy is a bit confusing. cheers, David From romain.py at gmail.com Tue Oct 18 01:17:17 2011 From: romain.py at gmail.com (Romain Guillebert) Date: Tue, 18 Oct 2011 01:17:17 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: <20111017231717.GB9121@hardshooter> Hi everyone, I guess people want to know what is the current status of the ctypes backend for Cython; you can read the last status update there: http://mail.python.org/pipermail/pypy-dev/2011-September/008260.html Of course I'm available for any kind of questions :) Cheers Romain On Mon, Oct 17, 2011 at 11:14:08PM +0200, Stefan Behnel wrote: > Alex Gaynor, 17.10.2011 18:14: > >On Mon, Oct 17, 2011 at 12:01 PM, Stefan Behnel wrote: > >>Maciej Fijalkowski, 17.10.2011 17:46: > >> - pypy's numpy *will* integrate in some sort of way with existing > >>>C/fortran libraries, but this way *will* be different than current > >>>CPython C API. It's really just too hard to get both. > >> > >>Why reinvent yet another wheel when you could make Cython a common language > >>to write extensions and wrapper code for both? Even if that requires a few > >>feature restrictions for Cython users or adaptations to their code to keep > >>it portable, it's still better than forcing users into a complete vendor > >>lock-in on both sides. > > > >There's no fundamental objection to Cython, but there are practical ones. > > I'm very well aware of that. There are both technical and practical > issues. I didn't hide the fact that the Python+ctypes backend for > Cython is quite far from being ready for use, for example.
> > > > a) Most of NumPy isn't Cython, so just having Cython gives us little. > > There has been the move towards a smaller core for NumPy, and we > perceive substantial interest, both inside and outside of the > Scientific Python community, in writing new wrapper code in Cython > and even in rewriting existing code in Cython to make it more > maintainable. Even generated wrappers were and are being rewritten, > e.g. to get rid of SWIG. Rewriting several hundred to thousand lines > of C code in Cython can often be done within a few days, depending > on test coverage and code complexity, and from what we hear, this is > actually being done or at least seriously considered in several > projects. It's helped by the fact that CPython users do not have to > make the switch right away, but can often migrate or add a module at > a time. > > I agree that simply supporting Cython is not going to magically > connect huge amounts of foreign code to PyPy. It just makes it a lot > easier to get closer to that goal than by inventing yet another way > of interfacing that is not supported by anything else. > > Also note that there isn't just NumPy. A relatively large part of > Sage is written in Cython, for example, especially those parts that > glue the rest together, which consists of huge amounts of C, C++ and > Fortran code. After all, Cython's predecessor Pyrex has been around > for almost ten years now. > > > > b) > >Is the NumPy on Cython house in order? AFAIK part of the MS project > >involved rewriting parts of NumPy in Cython and modularising Cython for > >targets besides CPython. And that this was *not* merged. For me to be > >convinced Cython is a good target, I'd need belief that there's an interest > >in it being a common platform, and when I see that there's work done, by > >core developers, which sits unmerged (with no timeline) I can't have faith > >in that. > > I understand that objection. 
The Cython project is largely driven by > the interest of core developers and users (now, how unexpected is > that?), and none of the developers currently uses IronPython or > PyPy. So, while we'd like to see Cython support other targets (and > the core developers agree on that goal), there isn't really a strong > incentive for ourselves to move it into that direction. It's a bit > of a chicken and egg problem - why support other platforms that > no-one uses it for, and who'd use it on a platform that's not as > well supported as CPython? > > I'd personally like to get the ctypes backend merged, but it's not > exactly in a state that is ready-to-merge soonish. There's a branch, > and Romain (our GSoC student for the project) is still working on > it, but obviously with much less time for it, so I'm sure he could > use another helping hand. > > https://github.com/hardshooter/CythonCTypesBackend > > The IronPython port is a different beast. It ran almost completely > in cloak mode, outside of the scope of the core developers, and it's > neither clear what the exact design goals were, nor what was > eventually achieved or in what status the code branch currently is. > The project itself died from sudden lack of interest on the side of > the financial supporters (MS) at some point, and it appears that > there is currently no-one who can easily take it over. Sad, but > really nothing to blame the Cython developers for. I'd be happy to > see it revived, if there is any interest. > > https://bitbucket.org/cwitty/cython-for-ironpython/overview > > Stefan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From lac at openend.se Tue Oct 18 06:14:03 2011 From: lac at openend.se (Laura Creighton) Date: Tue, 18 Oct 2011 06:14:03 +0200 Subject: [pypy-dev] PyPy Sprint Nov 2 - 9 in Gothenburg, Sweden In-Reply-To: Message from Armin Rigo of "Mon, 17 Oct 2011 21:42:02 +0200." 
References: <201110171757.p9HHvHgg018306@theraft.openend.se> Message-ID: <201110180414.p9I4E32i018749@theraft.openend.se> In a message of Mon, 17 Oct 2011 21:42:02 +0200, Armin Rigo writes: >Hi, > >On Mon, Oct 17, 2011 at 19:57, Laura Creighton wrote: >> .. _`people`: https://bitbucket.org/pypy/extradoc/src/tip/sprintinfo/gothenburg-2011/people.txt > >This is the file containing the previous sprint's attendance. I >suggest we use instead a file in the correct directory, which is: > >https://bitbucket.org/pypy/extradoc/src/tip/sprintinfo/gothenburg-2011-2/people.txt > > >A bientôt, > >Armin. Arrgh, I forgot to edit that line when starting with the older sprint announcement. Thank you for noticing this. Sorry. Laura From bokr at oz.net Tue Oct 18 09:08:52 2011 From: bokr at oz.net (Bengt Richter) Date: Tue, 18 Oct 2011 09:08:52 +0200 Subject: [pypy-dev] Contributing to pypy [especially numpy] In-Reply-To: References: <4E9BFF80.4070006@oz.net> Message-ID: <4E9D2604.6030609@oz.net> On 10/17/2011 01:26 PM Alex Gaynor wrote: > On Mon, Oct 17, 2011 at 6:12 AM, Bengt Richter wrote: > >> On 10/17/2011 12:10 AM Armin Rigo wrote: >> >>> Hi, >>> >>> On Sun, Oct 16, 2011 at 23:41, David Cournapeau >>> wrote: >>> >>>> Interesting to know. But then, wouldn't this limit the speed gains to >>>> be expected from the JIT ? >>>> >>> >>> Yes, to some extent. It cannot give you the last bit of performance >>> improvements you could expect from arithmetic optimizations, but (as >>> usual) you get already the several-times improvements of e.g. removing >>> the boxing and unboxing of float objects. Personally I'm wary of >>> going down that path, because it means that the results we get could >>> suddenly change their least significant digit(s) when the JIT kicks >>> in. At least there are multiple tests in the standard Python test >>> suite that would fail because of that.
>>> >>> And I am not sure I understand how you can "not go there" if you want >>>> to vectorize code to use SIMD instruction sets ? >>>> >>> >>> I'll leave fijal to answer this question in detail :-) I suppose that >>> the goal is first to use SIMD when explicitly requested in the RPython >>> source, in the numpy code that operates on matrices; and not do the >>> harder job of automatically unrolling and SIMD-ing loops containing >>> Python float operations. But even the latter could be done without >>> giving up on the idea that all Python operations should be present in >>> a bit-exact way (e.g. by using SIMD on 64-bit floats, not on 32-bit >>> floats). >>> >>> >>> A bientôt, >>> >>> Armin. >>> >> I'm wondering how you handle high level loop optimizations vs >> floating point order-sensitive calculations. E.g., if a source loop >> has z[i]=a*b*c, might you hoist b*c without considering that >> assert a*b*c == a*(b*c) might fail, as in >> >>>>> a=b=1e-200; c=1e200 >>>>> assert a*b*c == a*(b*c) >> Traceback (most recent call last): >> File "<stdin>", line 1, in <module> >> AssertionError >>>>> a*b*c, a*(b*c) >> (0.0, 1e-200) >> >> Regards, >> Bengt Richter >> >> >> >> ______________________________**_________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/**mailman/listinfo/pypy-dev >> > > No, you would never hoist b * c because b * c isn't an operation in that > loop, the only ops that exist are: > > t1 = a * b > t2 = t1 * c > z[i] = t2 > d'oh > even if we did do arithmetic reassociation (which we don't, yet), you can't > do them on floats. Hm, what if you could statically prove that the fp ops gave bit-wise exactly the same results when reordered (given you have 53 significant bits to play with)? (Maybe more theoretical question than practical). Regards, Bengt Richter P.S. What did you mean with the teaser, "(which we don't, yet)" ? When would you?
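Bengt's session generalizes: with IEEE-754 doubles, reassociating a product changes how the intermediate result rounds (or underflows), so a JIT that promises bit-exact results cannot reorder float ops in general. The narrow exception is when every operand and every intermediate product is exactly representable, e.g. integers below 2**53. A sketch (illustrative only, not PyPy code):

```python
# Reassociating float multiplies is not bit-exact in general: the
# intermediate result rounds -- or here, underflows -- differently
# depending on evaluation order.
a = b = 1e-200
c = 1e200
left = (a * b) * c    # a*b underflows to 0.0 first, so the result is 0.0
right = a * (b * c)   # b*c is ~1.0, so the product survives near 1e-200
assert left == 0.0
assert right != 0.0

# The safe special case: all operands and all intermediate products are
# integers below 2**53, so every multiply is exact and any evaluation
# order produces identical bits.
x, y, z = 3.0, 5.0, 7.0
assert (x * y) * z == x * (y * z)   # both sides are exactly 105.0
```

Proving that second condition statically is the hard part, which is presumably why the short answer above is simply "you can't do them on floats".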
From fijall at gmail.com Tue Oct 18 11:20:04 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 18 Oct 2011 11:20:04 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 10:18 PM, David Cournapeau wrote: > On Mon, Oct 17, 2011 at 8:40 PM, Maciej Fijalkowski wrote: >> On Mon, Oct 17, 2011 at 7:20 PM, David Cournapeau wrote: >>> On Mon, Oct 17, 2011 at 2:22 PM, Michael Foord wrote: >>> >>>> >>>> Travis' post seems to suggest that it is the responsibility of the *pypy* >>>> dev team to do the work necessary to integrate the numpy refactor (initially >>>> sponsored by Microsoft). That refactoring (smaller numpy core) seems like a >>>> great way forward for numpy - particularly if *it* wants to play well with >>>> multiple implementations, but it is unreasonable to expect the pypy team to >>>> pick that up! >>> >>> I am pretty sure Travis did not intend to suggest that (I did not >>> understand that from his wordings, but maybe that's because we had >>> discussion in person on that topic several times already). >>> >>> There are a lot of reasons to do that refactor that has nothing to do >>> with pypy, so the idea is more: let's talk about what pypy would need >>> to make this refactor beneficial for pypy *as well*. I (and other) >>> have advocated using more cython inside numpy and scipy. We could >>> share resources to do that. >> >> I think alex's question was whether the refactoring is going to be >> merged upstream or not (and what's the plan). > > I don't know if the refactoring will be merged as is, but at least I > think the refactoring needs to happen, independently of pypy. There is > no denying that parts of numpy's code are crufty, some stuff not > clearly separated, etc... > >> I don't think you understand our point. > > I really do. I understand that pypy is a much better platform than > cpython to do lazy evalution, fast pure python ufunc. Nobody denies > that. 
To be even clearer: if the goal is to have some concept of array > which looks like numpy, then yes, using numpy's code is useless. > >> Reusing the current numpy >> implementation is not giving us much *even* if it was all Cython and >> no C API. > > This seems to be the source of the disagreement: I think reusing numpy > means that you are much more likely to be able to run the existing > scripts using numpy on top of pypy. So my question is whether the > disagreement is on the value of that, or whether pypy community > generally thinks they can rewrite a "numpypy" which is a drop-in > replacement of numpy on cpython without using original numpy's code. > Ok Reusing numpy is maybe more likely to run the existing code indeed, but we'll take care to be compatible (same with Python as a language actually). Reusing the CPython C API parts of numpy however does mean that we nullify all the good parts of pypy - this is entirely pointless from my perspective. I can't see how you can get both JIT running nicely and reuse most of numpy. You have to sacrifice something and I would be willing to sacrifice code reuse. Indeed you would end up with two numpy implementations but it's not like numpy is changing that much after all. We can provide a cython or some sort of API to integrate with the existing legacy code later, but the point stays - I can't see the plan of using cool parts of pypy and numpy together. This is the question of what is harder - writing a reasonable JIT or writing numpy. I would say numpy and you guys seems to say JIT. Cheers, fijal From dirkjan at ochtman.nl Tue Oct 18 11:34:19 2011 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 18 Oct 2011 11:34:19 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Tue, Oct 18, 2011 at 11:20, Maciej Fijalkowski wrote: > numpy together. This is the question of what is harder - writing a > reasonable JIT or writing numpy. 
I would say numpy and you guys seems > to say JIT. I'm confused -- I'm fairly convinced you think that a reasonable JIT is harder than writing numpy, and not the other way around? Cheers, Dirkjan From fuzzyman at gmail.com Tue Oct 18 11:44:32 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Tue, 18 Oct 2011 10:44:32 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On 17 October 2011 18:20, David Cournapeau wrote: [snip...] On Mon, Oct 17, 2011 at 2:22 PM, Michael Foord wrote: > > It seems odd to argue that extending numpy to pypy will be a net negative > > for the community! Sure there are some difficulties involved, just as > there > > are difficulties with having multiple implementations in the first place, > > but the benefits are much greater. > > The net negative would be the community split, with numpy losing some > resources taken by numpy on pypy. This seems like a plausible > situation. > > Note that this is *exactly* the same "negative" that Python itself faces with multiple implementations. It has in fact been a great positive, widening the community and improving Python (and yes sometimes improving it by pointing out its problems). All the best, Michael > Without a C numpy API, you can't have scipy or matplotlib, no > scikit-learns, etc... But you could hide most of it behind cython, > which has momentum in the scientific community. Then a realistic > approach becomes: > - makes the cython+pypy backend a reality > - ideally make cython to wrap fortran a reality > - convert as much as possible from python C API to cython > > People of all level can participate. The first point in particular > could help pypy besides the scipy community. And that's a plan where > both parties would benefit from each other. > > cheers, > > David > > > > All the best, > > > > Michael Foord > > > >> > >> Alex > >> > >> -- > >> "I disapprove of what you say, but I will defend to the death your right > >> to say it." 
-- Evelyn Beatrice Hall (summarizing Voltaire) > >> "The people's good is the highest law." -- Cicero > >> > >> > >> _______________________________________________ > >> pypy-dev mailing list > >> pypy-dev at python.org > >> http://mail.python.org/mailman/listinfo/pypy-dev > >> > > > > > > > > -- > > > > http://www.voidspace.org.uk/ > > > > May you do good and not evil > > May you find forgiveness for yourself and forgive others > > > > May you share freely, never taking more than you give. > > -- the sqlite blessing http://www.sqlite.org/different.html > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > > > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Tue Oct 18 11:58:40 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 18 Oct 2011 11:58:40 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: Hi, On Tue, Oct 18, 2011 at 11:34, Dirkjan Ochtman wrote: > I'm confused -- I'm fairly convinced you think that a reasonable JIT > is harder than writing numpy, and not the other way around? Let me chime in --- applying the JIT to "numpypy" or to any other piece of RPython code is, if not trivial, at least very straightforward. That's what our "JIT generator" does for you. In comparison, writing numpy (in whatever way, including all the discussions here) is a much longer task. 
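For readers outside the project, the "JIT generator" point can be made concrete with a toy interpreter loop. The sketch below uses a stand-in JitDriver class so the snippet runs under plain Python; in real RPython code of that era the class came from pypy.rlib.jit with the same greens/reds idea, and the translation toolchain, not the interpreter author, produces the actual JIT:

```python
# Stand-in for pypy.rlib.jit.JitDriver so this sketch runs under plain
# Python. In RPython the hints below tell the generated JIT where the
# main interpreter loop is; untranslated, they are no-ops anyway.
class JitDriver(object):
    def __init__(self, greens, reds):
        self.greens = greens  # values identifying a position in the program
        self.reds = reds      # values live across one loop iteration

    def jit_merge_point(self, **live_vars):
        pass  # marks the top of the dispatch loop

    def can_enter_jit(self, **live_vars):
        pass  # marks a point where an interpreted loop closes

jitdriver = JitDriver(greens=['pc', 'program'], reds=['acc'])

def interpret(program):
    """Tiny accumulator machine: '+' adds one, '-' subtracts one."""
    pc = 0
    acc = 0
    while pc < len(program):
        jitdriver.jit_merge_point(pc=pc, program=program, acc=acc)
        op = program[pc]
        if op == '+':
            acc += 1
        elif op == '-':
            acc -= 1
        pc += 1
    return acc

print(interpret('+++-'))  # 2
```

Declaring the greens and reds is essentially all that "applying the JIT" costs the interpreter author, which is the sense in which it comes for free.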
Fijal is sticking to his point, which is that if we rewrite (large parts of) numpy in RPython, we are getting JIT support for free; but if we are *only* going down the route of interfacing with existing pieces of C code, we don't get any JIT in the end, and the performance will just suck. Not to mention that from my point of view it's clear which of the two paths is best to attract newcomers to pypy. A bientôt, Armin. From fijall at gmail.com Tue Oct 18 12:09:02 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 18 Oct 2011 12:09:02 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Tue, Oct 18, 2011 at 11:34 AM, Dirkjan Ochtman wrote: > On Tue, Oct 18, 2011 at 11:20, Maciej Fijalkowski wrote: >> numpy together. This is the question of what is harder - writing a >> reasonable JIT or writing numpy. I would say numpy and you guys seems >> to say JIT. > > I'm confused -- I'm fairly convinced you think that a reasonable JIT > is harder than writing numpy, and not the other way around? Yes, of course you're right :-) From cfbolz at gmx.de Tue Oct 18 13:40:48 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 18 Oct 2011 13:40:48 +0200 Subject: [pypy-dev] Developer selection for Py3k and Numpy Message-ID: <4E9D65C0.5090607@gmx.de> Hi all, Now that we are getting in some money for our Py3k [1] and Numpy [2] funding proposals (thank you very very much, for everybody who contributed!) it is time to think more concretely about the actual execution. Therefore I want to ask for PyPy developers that are interested in getting paid for their work on the first steps of the Numpy or Py3k proposals to step forward. To be applicable you need to be an experienced PyPy developer who worked in this area before (Numpy) or on the Python interpreter (Py3k). Based on these answers we will then select and announce developers to work on the proposals. The work will be started at the upcoming Gothenburg sprint.
Cheers, Carl Friedrich [1] http://pypy.org/py3donate.html [2] http://pypy.org/numpydonate.html From stefan_ml at behnel.de Tue Oct 18 14:19:19 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 18 Oct 2011 14:19:19 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: Michael Foord, 18.10.2011 11:44: > On 17 October 2011 18:20, David Cournapeau wrote: > On Mon, Oct 17, 2011 at 2:22 PM, Michael Foord wrote: >>> It seems odd to argue that extending numpy to pypy will be a net negative >>> for the community! Sure there are some difficulties involved, just as >> there >>> are difficulties with having multiple implementations in the first place, >>> but the benefits are much greater. >> >> The net negative would be the community split, with numpy losing some >> resources taken by numpy on pypy. This seems like a plausible >> situation. > > Note that this is *exactly* the same "negative" that Python itself faces > with multiple implementations. It has in fact been a great positive, > widening the community and improving Python (and yes sometimes improving it > by pointing out its problems). I think both of you are talking about two different scenarios here. One situation is where PyPy gains a NumPy compatible implementation (based on NumPy or not) and most code that runs on NumPy today can run on either CPython's NumPy or PyPy's NumPy. That may lead to the gains that you are talking about, because users can freely choose what suits their needs best, without getting into dependency hell. It may also eventually lead to changes in CPython's NumPy to adapt to requirements or improvements in PyPy's. Both could learn from each other and win. The other situation is where PyPy does its own thing and supports some NumPy code that happens to run faster than in CPython, while other code does not work at all, with the possibility to replace it in a PyPy specific way.
That would mean that some people would write code for one platform that won't run on the other, and vice-versa, although it actually deals with the same kind of data. This strategy is sometimes referred to as "embrace, extend and extinguish". It would not be an improvement, not for CPython, likely not for PyPy either, and certainly not for the scientific Python community as a whole. Stefan From arigo at tunes.org Tue Oct 18 14:41:24 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 18 Oct 2011 14:41:24 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: Hi, On Tue, Oct 18, 2011 at 14:19, Stefan Behnel wrote: > The other situation is where PyPy does its own thing and supports some NumPy > code that happens to run faster than in CPython, while other code does not > work at all, with the possibility to replace it in a PyPy specific way. I think you are disregarding what 8 years of the PyPy project should have made obvious. Yes, some code will not run at all on PyPy at first, and that amount of code is going to be reduced over time. But what we want is to avoid a community split, so we are never, ever, going to add and advertise PyPy-only ways to write programs. A bient?t, Armin. From ian at ianozsvald.com Tue Oct 18 15:05:20 2011 From: ian at ianozsvald.com (Ian Ozsvald) Date: Tue, 18 Oct 2011 14:05:20 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: > As an example - I want numpy for client work. For my clients (the main > being a physics company that is replacing Fortran with Python) numpy > is at the heart of their simulations. However - numpy is used with > matplotlib and pyCUDA and parts of scipy. If basic tools like FFT > aren't available *and compatible* (i.e. not new implementations but > running on tried, trusted and consistent C libs) then there'd be > little reason to use pypy+numpy. pyCUDA could be a longer term goal > but matplotlib would be essential. 
Hi David, Fijal. I'll reply to this earlier post as the overnight discussion doesn't seem to have a good place to add this. Someone else (I can't find a name) posted this nice summary: http://blog.streamitive.com/2011/10/17/numpy-isnt-about-fast-arrays/ which mostly echoes my position. Does anyone have a guestimate of the size of the active numpy user community minus the scipy/extensions community. I.e. the size of the community that might benefit from pypy-numpy (excluding those that use scipy etc who couldn't benefit for a [long] while)? At EuroSciPy it felt as though many people used numpy+scipy (noting that it was a scipy conference). At EuroPython there were a number of talks that used numpy but mostly they used other C or extension components (e.g. pyCUDA, Theano, visualisation tools). i. -- Ian Ozsvald (A.I. researcher) ian at IanOzsvald.com http://IanOzsvald.com http://MorConsulting.com/ http://StrongSteam.com/ http://SocialTiesApp.com/ http://TheScreencastingHandbook.com http://FivePoundApp.com/ http://twitter.com/IanOzsvald From benjamin at python.org Tue Oct 18 15:21:14 2011 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 18 Oct 2011 09:21:14 -0400 Subject: [pypy-dev] Developer selection for Py3k and Numpy In-Reply-To: <4E9D65C0.5090607@gmx.de> References: <4E9D65C0.5090607@gmx.de> Message-ID: 2011/10/18 Carl Friedrich Bolz : > Hi all, > > Now that we are getting in some money for our Py3k [1] and Numpy [2] > funding proposals (thank you very very much, for everybody who > contributed!) it is time to think more concretely about the actual > execution. > > Therefore I want to ask for PyPy developers that are interested in > getting paid for their work on the first steps of the Numpy or Py3k > proposals to step forward. To be applicable you need to be an > experienced PyPy developer who worked in this area before (Numpy) or on > the Python interpreter (Py3k). Please consider me to have stepped forward. 
:) -- Regards, Benjamin From alex.gaynor at gmail.com Tue Oct 18 15:22:37 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Tue, 18 Oct 2011 09:22:37 -0400 Subject: [pypy-dev] Developer selection for Py3k and Numpy In-Reply-To: References: <4E9D65C0.5090607@gmx.de> Message-ID: On Tue, Oct 18, 2011 at 9:21 AM, Benjamin Peterson wrote: > 2011/10/18 Carl Friedrich Bolz : > > Hi all, > > > > Now that we are getting in some money for our Py3k [1] and Numpy [2] > > funding proposals (thank you very very much, for everybody who > > contributed!) it is time to think more concretely about the actual > > execution. > > > > Therefore I want to ask for PyPy developers that are interested in > > getting paid for their work on the first steps of the Numpy or Py3k > > proposals to step forward. To be applicable you need to be an > > experienced PyPy developer who worked in this area before (Numpy) or on > > the Python interpreter (Py3k). > > Please consider me to have stepped forward. :) > > > > -- > Regards, > Benjamin > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > I'm interested in the py3k work. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From holger at merlinux.eu Tue Oct 18 15:25:56 2011 From: holger at merlinux.eu (holger krekel) Date: Tue, 18 Oct 2011 13:25:56 +0000 Subject: [pypy-dev] Developer selection for Py3k and Numpy In-Reply-To: References: <4E9D65C0.5090607@gmx.de> Message-ID: <20111018132556.GM27936@merlinux.eu> On Tue, Oct 18, 2011 at 09:21 -0400, Benjamin Peterson wrote: > 2011/10/18 Carl Friedrich Bolz : > > Hi all, > > > > Now that we are getting in some money for our Py3k [1] and Numpy [2] > > funding proposals (thank you very very much, for everybody who > > contributed!) it is time to think more concretely about the actual > > execution. > > > > Therefore I want to ask for PyPy developers that are interested in > > getting paid for their work on the first steps of the Numpy or Py3k > > proposals to step forward. To be applicable you need to be an > > experienced PyPy developer who worked in this area before (Numpy) or on > > the Python interpreter (Py3k). > > Please consider me to have stepped forward. :) for py3k, i assume. holger From benjamin at python.org Tue Oct 18 15:27:56 2011 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 18 Oct 2011 09:27:56 -0400 Subject: [pypy-dev] Developer selection for Py3k and Numpy In-Reply-To: <20111018132556.GM27936@merlinux.eu> References: <4E9D65C0.5090607@gmx.de> <20111018132556.GM27936@merlinux.eu> Message-ID: 2011/10/18 holger krekel : > On Tue, Oct 18, 2011 at 09:21 -0400, Benjamin Peterson wrote: >> 2011/10/18 Carl Friedrich Bolz : >> > Hi all, >> > >> > Now that we are getting in some money for our Py3k [1] and Numpy [2] >> > funding proposals (thank you very very much, for everybody who >> > contributed!) it is time to think more concretely about the actual >> > execution. >> > >> > Therefore I want to ask for PyPy developers that are interested in >> > getting paid for their work on the first steps of the Numpy or Py3k >> > proposals to step forward. 
To be applicable you need to be an >> > experienced PyPy developer who worked in this area before (Numpy) or on >> > the Python interpreter (Py3k). >> >> Please consider me to have stepped forward. :) > > for py3k, i assume. Indeed. -- Regards, Benjamin From anto.cuni at gmail.com Tue Oct 18 16:11:23 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 18 Oct 2011 16:11:23 +0200 Subject: [pypy-dev] Developer selection for Py3k and Numpy In-Reply-To: <4E9D65C0.5090607@gmx.de> References: <4E9D65C0.5090607@gmx.de> Message-ID: <4E9D890B.4060609@gmail.com> On 18/10/11 13:40, Carl Friedrich Bolz wrote: > Hi all, > > Now that we are getting in some money for our Py3k [1] and Numpy [2] > funding proposals (thank you very very much, for everybody who > contributed!) it is time to think more concretely about the actual > execution. > > Therefore I want to ask for PyPy developers that are interested in > getting paid for their work on the first steps of the Numpy or Py3k > proposals to step forward. To be applicable you need to be an > experienced PyPy developer who worked in this area before (Numpy) or on > the Python interpreter (Py3k). I'd like to be considered to help implementing the Py3k proposal. ciao, Anto From fijall at gmail.com Tue Oct 18 18:09:19 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 18 Oct 2011 18:09:19 +0200 Subject: [pypy-dev] Developer selection for Py3k and Numpy In-Reply-To: <4E9D890B.4060609@gmail.com> References: <4E9D65C0.5090607@gmx.de> <4E9D890B.4060609@gmail.com> Message-ID: On Tue, Oct 18, 2011 at 4:11 PM, Antonio Cuni wrote: > On 18/10/11 13:40, Carl Friedrich Bolz wrote: >> >> Hi all, >> >> Now that we are getting in some money for our Py3k [1] and Numpy [2] >> funding proposals (thank you very very much, for everybody who >> contributed!) it is time to think more concretely about the actual >> execution. 
>> >> Therefore I want to ask for PyPy developers that are interested in >> getting paid for their work on the first steps of the Numpy or Py3k >> proposals to step forward. To be applicable you need to be an >> experienced PyPy developer who worked in this area before (Numpy) or on >> the Python interpreter (Py3k). > > I'd like to be considered to help implementing the Py3k proposal. > > ciao, > Anto > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > I would like to be considered for both (with the slight preference for numpy). From jacob at openend.se Tue Oct 18 18:41:24 2011 From: jacob at openend.se (Jacob Hallén) Date: Tue, 18 Oct 2011 18:41:24 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: <201110181841.31079.jacob@openend.se> Monday 17 October 2011 you wrote: > On 17 October 2011 16:42, Ian Ozsvald wrote: > > > For pypy I can't see any better approach than the way they have taken. > > > > Once > > > > > people are using numpy on pypy the limitations and missing parts will > > > > become > > > > > clear, and not only will the way forward be more obvious but there will > > > > be > > > > > more people involved to do the work. > > > > Michael - I agree that the PyPy community shouldn't do all the > > legwork! I agree also that the proposed path may spur more work (and > > maybe that's the best goal for now). > > > > I've gone back to the donations page: > > http://pypy.org/numpydonate.html > > to re-read the spec. What I get now (but didn't get before the > > discussion at Enthought Cambridge) is that "we don't plan to implement > > NumPy's C API" is a big deal (and not taking it on is entirely > > reasonable for this project!). > > > > In my mind (and maybe in the mind of some others who use scipy?)
a > > base pypy+numpy project would easily open the door to matplotlib and > > all the other scipy goodies, it looks now like that isn't the case. > > Hence my questions to try to understand what else might be involved. > > Well, I think it definitely "opens the door" - certainly a lot more than > not doing the work! You have to start somewhere. > > It seems like other projects (like the pypy cython backend) will help make > other parts of project easier down the line. > > Back to Alex's question, how else would you *suggest* starting? Isn't a > core port of the central parts the obvious way to begin? > > Given the architecture of numpy it does seem that it opens up a whole bunch > of questions around numpy on multiple implementations. Certainly pypy > should be involved in the discussion here, but I don't think it is up to > pypy to find (or implement) the answers... I'd just like to note that the compelling reason for PyPy to develop numpy support is popular demand. We did a survey last spring, in which an overwhelming number of people asked for numpy support. This indicates that there is a large group of people who will be reap benefits from using PyPy plus Numpy, without specific support for scipy packages. Some of them may want to port their favourite scipy packages to work with PyPy. If the PyPy community decided that it was more important to keep the integrity of the Numpy community, we would hold these people back in order to prevent a fragmentation. I think we have to accept that some people have needs that are better served with what PyPy can provide today, while others will have to wait for theirs to be dealt with. This is the natural succession of technologies. From my perspective, PyPy based scientific computing makes sense. You get to write more of your code in a high level language, saving implementation time. If you agree with this, then the most sensible thing is to help make the transition as smooth as possible. 
Making it easy to integrate modules from FORTRAN, C++ and whatnot is part of such a task. Exactly what to do should be demand driven, and that is why PyPy doesn't have a plan or a timetable for these things. Like in all technology shifts, some of the old stuff will be easy to bring along, some will be hard and will be done anyway. The rest will fall to the wayside. If you believe the researchers are better served writing more code in low level languages and dealing with the issues of integrating their low level stuff with Python (in order to not to have to modify existing packages), then you will probably hope that PyPy is a fad that will die. In the very short term you are probably right. In the very short term it will be quicker to wade across a river rather than build a bridge. Jacob -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From santagada at gmail.com Tue Oct 18 19:23:14 2011 From: santagada at gmail.com (Leonardo Santagada) Date: Tue, 18 Oct 2011 15:23:14 -0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Tue, Oct 18, 2011 at 11:05 AM, Ian Ozsvald wrote: >> As an example - I want numpy for client work. For my clients (the main >> being a physics company that is replacing Fortran with Python) numpy >> is at the heart of their simulations. However - numpy is used with >> matplotlib and pyCUDA and parts of scipy. If basic tools like FFT >> aren't available *and compatible* (i.e. not new implementations but >> running on tried, trusted and consistent C libs) then there'd be >> little reason to use pypy+numpy. pyCUDA could be a longer term goal >> but matplotlib would be essential. > > Hi David, Fijal. I'll reply to this earlier post as the overnight > discussion doesn't seem to have a good place to add this. 
> > Someone else (I can't find a name) posted this nice summary: > http://blog.streamitive.com/2011/10/17/numpy-isnt-about-fast-arrays/ > which mostly echoes my position. Yes, and pypy numpy does support dtype IIUC, so in the end it will have all the features of numpy described in the article. It is going to be one interface for all the libraries to talk to, but it is not going to be the same as cpython numpy. I don't think it is impossible to have an easy path for people to support both cpython numpy and pypy numpy on the same lib (either using cython or a simple C API). Maybe an easy thing to do is to make something like cpyext just for numpy api, and then later agree on a common api for both, or to make cython generate the correct one for each interpreter. -- Leonardo Santagada From arigo at tunes.org Tue Oct 18 21:02:22 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 18 Oct 2011 21:02:22 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: Hi David, On Tue, Oct 18, 2011 at 18:29, David Cournapeau wrote: >>> (...) with the possibility to replace it in a PyPy specific way. >> >> I think you are disregarding what 8 years of the PyPy project should >> have made obvious. (...) > > Ok. In that case, it is fair to say that you are talking about a full > reimplementation of the whole scipy ecosystem, at least as much as > pypy itself is a reimplementation of python? I think the original topic of this discussion is numpy, not scipy. The answer is that I don't know. I am sure that people will reimplement whatever module is needed, or design a generic but slower way to interface with C a la cpyext, or write a different C API, or rely on Cython versions of their libraries and have Cython support in PyPy... or more likely all of these approaches and more.
The point is that right now we are focusing on numpy only, and we want to make existing pure Python numpy programs run fast --- not just run horribly slowly --- both in the case of "standard" numpy programs, and in the case of programs that do not strictly follow the mold of "take your algorithm, then shuffle it and rewrite it and possibly obfuscate it until it is expressed as matrix operations with no computation left in pure Python". This is the first step for us right now. It will take some time before we have to even consider running scipy programs. By then I imagine that either the approach works and delivers good performance --- and then people (us and others) will have to consider the next steps to build on top of that --- or it just doesn't (which looks unlikely given the good preliminary results, which is why we can ask for support via donations). We did not draw precise plans for what comes next. I think the above would already be a very useful result for some users. But to me, it looks like a strong enough pull to motivate some more people to do the next steps --- Cython, C API, rewrite of some modules, and so on, including the perfectly fine opinion "in my case pypy is not giving enough benefits for me to care". Note that these are roughly the same issues and same solution spaces as the ones that exist in any domain with PyPy, not just numpy/scipy. A bientôt, Armin. From stefan_ml at behnel.de Wed Oct 19 08:07:12 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 19 Oct 2011 08:07:12 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: Leonardo Santagada, 18.10.2011 19:23: > pypy numpy does support dtype IIUC so in the end it will have > all the features of numpy described in the article, it is going to be > one interface to all the libraries to talk to, but it is not going to > be the same as cpython numpy.
I don't think it is impossible to have > an easy path for people to support both cpython numpy and pypy numpy > on the same lib (either using cython or a simple C API). Maybe a easy > to do is to make something like cpyext just for numpy api, and then > latter agree on a common api for both, or to make cython to generate > the correct one for each interpreter. Basically, all that Cython does (at least for recent versions of NumPy), is to generate C level access code through the PEP 3118 buffer API. I don't know if that (or something like it) is available in PyPy. Even if not, it may not be hard to emulate at a ctypes-like level (it requires C data types for correct access to the array fields). Stefan From stefan_ml at behnel.de Wed Oct 19 09:02:17 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 19 Oct 2011 09:02:17 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <201110181841.31079.jacob@openend.se> References: <201110181841.31079.jacob@openend.se> Message-ID: Jacob Hall?n, 18.10.2011 18:41: > I'd just like to note that the compelling reason for PyPy to develop numpy > support is popular demand. We did a survey last spring, in which an > overwhelming number of people asked for numpy support. This indicates that > there is a large group of people who will be reap benefits from using PyPy > plus Numpy, without specific support for scipy packages. Depends on what the question was. Many people say "NumPy", and when you ask back, you find out that they actually meant "SciPy" or at least "NumPy and parts x, y and z of its ecosystem that I commonly use, oh, and I forgot about abc as well, and ...". NumPy itself is just the most visible pile in a fairly vast landscape. > From my perspective, PyPy based scientific computing makes sense. You get to > write more of your code in a high level language, saving implementation time. > If you agree with this, then the most sensible thing is to help make the > transition as smooth as possible. 
Making it easy to integrate modules from > FORTRAN, C++ and whatnot is part of such a task. Exactly what to do should be > demand driven, and that is why PyPy doesn't have a plan or a timetable for > these things. Like in all technology shifts, some of the old stuff will be > easy to bring along, some will be hard and will be done anyway. The rest will > fall to the wayside. I don't think anyone here speaks against PyPy integrating with the scientific world and all of its existing achievements in terms of code. However, *that* is the right direction. It's PyPy that needs to integrate. Integration with (C)Python is already there, for tons of tools and in manyfold ways. Suggesting that people throw that away, that they restart from scratch and maintain a separate set of integration code in parallel, just to use yet another Python implementation, is asking for a huge waste of time. > If you believe the researchers are better served writing more code in low > level languages and dealing with the issues of integrating their low level > stuff with Python (in order to not to have to modify existing packages), then > you will probably hope that PyPy is a fad that will die. In the very short > term you are probably right. In the very short term it will be quicker to wade > across a river rather than build a bridge. Sorry to get you wrong, but that smells a bit too much like a "PyPy will generate the fastest code on earth, so you won't need anything else" ad, which is not supported by any facts I know of. I'm yet to see PyPy compete with FFTW, just as an example. Researchers *are* better served by integrating their own and other people's "low-level stuff", than by writing it all over again. They want to do research, not programming. And there will always be points where they resort to a low-level language, be it because of specific performance requirements or because they need to integrate it with something else than Python, be it in the form of PyPy or CPython. 
Remember that C is many times more ubiquitous than the entire set of Python implementations taken together. That won't change. Stefan From cournape at gmail.com Tue Oct 18 18:29:12 2011 From: cournape at gmail.com (David Cournapeau) Date: Tue, 18 Oct 2011 17:29:12 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Tue, Oct 18, 2011 at 1:41 PM, Armin Rigo wrote: > Hi, > > On Tue, Oct 18, 2011 at 14:19, Stefan Behnel wrote: >> The other situation is where PyPy does its own thing and supports some NumPy >> code that happens to run faster than in CPython, while other code does not >> work at all, with the possibility to replace it in a PyPy specific way. > > I think you are disregarding what 8 years of the PyPy project should > have made obvious. ?Yes, some code will not run at all on PyPy at > first, and that amount of code is going to be reduced over time. ?But > what we want is to avoid a community split, so we are never, ever, > going to add and advertise PyPy-only ways to write programs. Ok. In that case, it is fair to say that you are talking about a full reimplementation of the whole scipy ecosystem, at least as much as pypy itself is a reimplementation of python ? (since none of the existing ecosystem will work without the C numpy API). Sorry for being dense, just want to make sure I am not misrepresenting your approach, David From cournape at gmail.com Tue Oct 18 21:33:00 2011 From: cournape at gmail.com (David Cournapeau) Date: Tue, 18 Oct 2011 20:33:00 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: On Tue, Oct 18, 2011 at 8:02 PM, Armin Rigo wrote: > Hi David, > > On Tue, Oct 18, 2011 at 18:29, David Cournapeau wrote: >>>> (...) with the possibility to replace it in a PyPy specific way. >>> >>> I think you are disregarding what 8 years of the PyPy project should >>> have made obvious. (...) >> >> Ok. 
In that case, it is fair to say that you are talking about a full >> reimplementation of the whole scipy ecosystem, at least as much as >> pypy itself is a reimplementation of python? > > I think the original topic of this discussion is numpy, not scipy. > The answer is that I don't know. I am sure that people will > reimplement whatever module is needed, or design a generic but slower > way to interface with C a la cpyext, or write a different C API, or > rely on Cython versions of their libraries and have Cython support in > PyPy... or more likely all of these approaches and more. > > The point is that right now we are focusing on numpy only, and we want > to make existing pure Python numpy programs run fast --- not just run > horribly slowly --- both in the case of "standard" numpy programs, > and in the case of programs that do not strictly follow the mold of > "take your algorithm, then shuffle it and rewrite it and possibly > obfuscate it until it is expressed as matrix operations with no > computation left in pure Python". > > This is the first step for us right now. It will take some time > before we have to even consider running scipy programs. By then I > imagine that either the approach works and delivers good performance > --- and then people (us and others) will have to consider the next > steps to build on top of that --- or it just doesn't (which looks > unlikely given the good preliminary results, which is why we can ask > for support via donations). > > We did not draw precise plans for what comes next. I think the above > would already be a very useful result for some users. But to me, it > looks like a strong enough pull to motivate some more people to do the > next steps --- Cython, C API, rewrite of some modules, and so on, > including the perfectly fine opinion "in my case pypy is not giving > enough benefits for me to care".
Note that these are roughly the same > issues and same solution spaces as the ones that exist in any domain > with PyPy, not just numpy/scipy. Thank you for the clear explanation, Armin, that makes things much clearer, at least to me. cheers, David From garyrob at me.com Wed Oct 19 12:35:47 2011 From: garyrob at me.com (Gary Robinson) Date: Wed, 19 Oct 2011 06:35:47 -0400 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> > Jacob Hallén, 18.10.2011 18:41: >> I'd just like to note that the compelling reason for PyPy to develop numpy >> support is popular demand. We did a survey last spring, in which an >> overwhelming number of people asked for numpy support. This indicates that >> there is a large group of people who will reap benefits from using PyPy >> plus Numpy, without specific support for scipy packages. > > Depends on what the question was. Many people say "NumPy", and when you ask > back, you find out that they actually meant "SciPy" or at least "NumPy and > parts x, y and z of its ecosystem that I commonly use… I was one of the people who responded to that poll, and I have to say that I fall into the category "they actually meant 'SciPy'…". I assumed that there would be an interface to numpy that would also support scipy. SciPy has a lot of packages that run various things like SVD very efficiently because it does them in C. I need access to those packages. I also write my own algorithms. For those, I want to benefit from PyPy's speed and don't necessarily want to make the algorithms fit into numpy's array-processing approach.
So, I have to say, I am unhappy with the current PyPy approach to NumPy. I'd rather see a much slower NumPy/PyPy integration if that meant being able to use SciPy seamlessly with PyPy. -- Gary Robinson CTO Emergent Discovery, LLC personal email: garyrob at me.com work email: grobinson at emergentdiscovery.com Company: http://www.emergentdiscovery.com Blog: http://www.garyrobinson.net From anto.cuni at gmail.com Wed Oct 19 13:42:18 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 19 Oct 2011 13:42:18 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> Message-ID: <4E9EB79A.7060901@gmail.com> Hi Gary, On 19/10/11 12:35, Gary Robinson wrote: > So, I have to say, I am unhappy with the current PyPy approach to NumPy. I'd rather see a much slower NumPy/PyPy integration if that meant being able to use SciPy seamlessly with PyPy. I'm not sure to interpret your sentence correctly. Are you saying that you would still want a pypy+numpy+scipy, even if it ran things slower than CPython? May I ask why? ciao, Anto From bokr at oz.net Wed Oct 19 14:06:44 2011 From: bokr at oz.net (Bengt Richter) Date: Wed, 19 Oct 2011 14:06:44 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: <4E9EBD54.9090909@oz.net> On 10/18/2011 02:41 PM Armin Rigo wrote: > Hi, > > On Tue, Oct 18, 2011 at 14:19, Stefan Behnel wrote: >> The other situation is where PyPy does its own thing and supports some NumPy >> code that happens to run faster than in CPython, while other code does not >> work at all, with the possibility to replace it in a PyPy specific way. > > I think you are disregarding what 8 years of the PyPy project should > have made obvious. Yes, some code will not run at all on PyPy at > first, and that amount of code is going to be reduced over time. 
But > what we want is to avoid a community split, so we are never, ever, > going to add and advertise PyPy-only ways to write programs. > > > A bient?t, > > Armin. Just the same, I think PyPy could be allowed to have an "import that" ;-) I think I read somewhere that PyPy's ambitions were not just to serve official Python with fast implementation, but possibly other language development too. Is that still true? If one wanted to use PyPy's great infrastructure to implement a new language, what would be the pypythonic bootstrapping path towards a self-hosting new language? BTW, would self-hosting be seen as a disruptive forking goal? If the new language were just a tweak on Python, what would be the attitude towards starting with a source-source preprocessor followed by invoking pypy on the result? Just as an example (without looking at grammar problems for the moment), what if I wanted to transform just assignments such that x:=expr meant x=expr as usual, except produced source mods to tell pypy that subsequently it could assume that x had unchanging type (which it might be able to infer anyway in most cases, but it would also make programmer intent human-perceptible). In similar vein, x::=expr could mean x's value (and type) would be frozen after the first evaluation. This raises the question of how would I best tell pypy this meta-information about x with legal manual edits now? Assertions? Could you see pypy accepting command-line options with information about specific variables, functions, modules, etc.? (such options could of course be collected in a file referenced from a command line option). It is then a short step to create some linkage between the source files and the meta-data files, e.g. by file name extensions like .py and .pyc for the same file. Maybe .pym? If so pypy could look for .pym the way it looks for .pyc at the appropriate time. 
Going further, what about preprocessing with a statement decorator using e.g., @@decomodule.fun statement # with suite to mean the preprocessor at read time should import decomodule in a special environment and pass the statement (with suite) to decomodule.fun for source transformation with return of source to be substituted. BTW, during the course of preprocessing, the "special environment" for importing statement-decorating modules could persist, and state from early decorator calls could affect subsequent ones. Well, this would be Python with macros in a sense, so it would presumably be too disruptive to be considered for any pythonic Python. OTOH, it might be an interesting path for some variations to converge under one sort-of-pythonic (given the python decorator as precedent) source-transformation markup methodology. (I'm just looking for reaction to the general idea, not specific syntax problems). Regards, Bengt Richter From anto.cuni at gmail.com Wed Oct 19 13:57:01 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 19 Oct 2011 13:57:01 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <4E9EB79A.7060901@gmail.com> References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> Message-ID: <4E9EBB0D.6060805@gmail.com> On 19/10/11 13:42, Antonio Cuni wrote: > I'm not sure to interpret your sentence correctly. > Are you saying that you would still want a pypy+numpy+scipy, even if it ran > things slower than CPython? May I ask why? ah sorry, I think I misunderstood your email. You would like pypy+numpy+scipy so that you could write fast python-only algorithms and still use the existing libraries. I suppose this is a perfectly reasonable usecase, and indeed the current plan does not focus on this. 
However, I'd like to underline that to write "fast python-only algorithms", you most probably still need a fast numpy in the way it is written right now (unless you want to write your algorithms without using numpy at all). If we went to the slow-but-scipy-compatible approach, any pure python algorithm which interfaces with numpy arrays would be terribly slow. ciao, Anto From bea at changemaker.nu Wed Oct 19 14:36:15 2011 From: bea at changemaker.nu (Bea During) Date: Wed, 19 Oct 2011 14:36:15 +0200 Subject: [pypy-dev] Success histories needed In-Reply-To: References: <1774912.BUcLc4JMg6@hunter-laptop.tontut.fi> Message-ID: <4E9EC43F.5030807@changemaker.nu> Hi there Maciej Fijalkowski skrev 2011-10-17 10:30: > On Mon, Oct 17, 2011 at 10:17 AM, Alex Pyattaev wrote: >> I have a fully-functional wireless network simulation tool written in >> pypy+swig. Is that nice? Have couple papers to refer to as well. If you want I >> could write a small abstract on how it was so great to use pypy (which, in >> fact, was really great)? > Please :) As Maciej says - please do ;-) And I think we may want to be able to publish your abstract on our blog if that would be possible? Cheers Bea > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From p.j.a.cock at googlemail.com Wed Oct 19 14:37:01 2011 From: p.j.a.cock at googlemail.com (Peter Cock) Date: Wed, 19 Oct 2011 13:37:01 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <4E9EBB0D.6060805@gmail.com> References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> <4E9EBB0D.6060805@gmail.com> Message-ID: On Wed, Oct 19, 2011 at 12:57 PM, Antonio Cuni wrote: > On 19/10/11 13:42, Antonio Cuni wrote: > >> I'm not sure to interpret your sentence correctly. >> Are you saying that you would still want a pypy+numpy+scipy, >> even if it ran things slower than CPython? May I ask why? > > ah sorry, I think I misunderstood your email. > > You would like pypy+numpy+scipy so that you could write fast > python-only algorithms and still use the existing libraries. ?I > suppose this is a perfectly reasonable usecase, and indeed > the current plan does not focus on this. I want this too - well actually pypy+numpy+xxx where xxx uses bits of the numpy C API. I don't care if the numpy bits are a *bit* slower under PyPy than C Python - 100% compatibility is more important to me. > However, I'd like to underline that to write "fast python-only > algorithms", you most probably still need a fast numpy in the > way it is written right now (unless you want to write your > algorithms without using numpy at all). ?If we went to the > slow-but-scipy-compatible approach, any pure python > algorithm which interfaces with numpy arrays would be > terribly slow. I'd be happy with "close to numpy under C Python" speeds for my code using numpy under PyPy, with fast python-only bits. That covers quite a lot of use cases I would think, but if we'd get "terribly slow" for the numpy using bits that is less tempting.
Depending on your value of terrible ;) Right now the PyPy micronumpy is far too limited to be of real use even where I'm using only the Python interface. e.g. there is no numpy.linalg module: https://bugs.pypy.org/issue915 Peter From arigo at tunes.org Wed Oct 19 14:38:56 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 19 Oct 2011 14:38:56 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <4E9EBD54.9090909@oz.net> References: <4E9EBD54.9090909@oz.net> Message-ID: Hi Bengt, PyPy is indeed supporting multiple _existing_ languages (with SWI Prolog being the 2nd quasi-complete language right now). However, most of us are not interested in exploratory language design, say in the form of syntax tweaks to Python. You are welcome to fork PyPy's bitbucket repository and hack there, but you will have more interested answers if you move this discussion somewhere more appropriate (like the python-ideas mailing list). A bientôt, Armin. From wesmckinn at gmail.com Wed Oct 19 15:32:32 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Wed, 19 Oct 2011 09:32:32 -0400 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> Message-ID: On Wed, Oct 19, 2011 at 6:35 AM, Gary Robinson wrote: >> Jacob Hallén, 18.10.2011 18:41: >>> I'd just like to note that the compelling reason for PyPy to develop numpy >>> support is popular demand. We did a survey last spring, in which an >>> overwhelming number of people asked for numpy support. This indicates that >>> there is a large group of people who will reap benefits from using PyPy >>> plus Numpy, without specific support for scipy packages. >> >> Depends on what the question was. Many people say "NumPy", and when you ask >> back, you find out that they actually meant "SciPy" or at least "NumPy and >> parts x, y and z of its ecosystem that I commonly use…
> > I was one of the people who responded to that poll, and I have to say that I fall into the category "they actually meant 'SciPy'?". ?I assumed that there would be an interface to numpy that would also support scipy. SciPy has a lot of packages that run various things like SVD very, efficiently because it does them in C. I need access to those packages. I also write my own algorithms. For those, I want to benefit from PyPy's speed and don't necessarily want to make the algorithms fit into numpy's array-processing approach. > I suspect most of the poll respondents see NumPy as representing a lot more than just a fast ndarray-- i.e. that being able to type import numpy is the key to being able to tap the whole scientific Python ecosystem. I cannot be certain, but the I am willing to bet the percentage of people who use NumPy and NumPy only relative to the rest of the scientific Python community is very small. > So, I NEED SciPy, and would like to also have PyPy, and I'd like to use them together rather than having to separate everything into separate scripts, some of which use CPython/SciPy and some of which use PyPy. In fact, my current code doesn't need NumPy at all except as the way to get to SciPy. > > So, I have to say, I am unhappy with the current PyPy approach to NumPy. I'd rather see a much slower NumPy/PyPy integration if that meant being able to use SciPy seamlessly with PyPy. > > > -- > > Gary Robinson > CTO > Emergent Discovery, LLC > personal email: garyrob at me.com > work email: grobinson at emergentdiscovery.com > Company: http://www.emergentdiscovery.com > Blog: ? 
?http://www.garyrobinson.net > > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From bokr at oz.net Wed Oct 19 15:54:25 2011 From: bokr at oz.net (Bengt Richter) Date: Wed, 19 Oct 2011 15:54:25 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <4E9EBD54.9090909@oz.net> Message-ID: <4E9ED691.3040005@oz.net> On 10/19/2011 02:38 PM Armin Rigo wrote: > Hi Bengt, > > PyPy is indeed supporting multiple _existing_ languages (with SWI > Prolog being the 2nd quasi-complete language right now). However, > most of us are not interested in exploratory language design, say in > the form of syntax tweaks to Python. > > You are welcome to fork PyPy's bitbucket repository and hack there, > but you will have more interested answers if you move this discussion > somewhere more appropriate (like the python-ideas mailing list). > > > A bient?t, > > Armin. Thank you. Regards, Bengt Richter From anto.cuni at gmail.com Wed Oct 19 16:27:45 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 19 Oct 2011 16:27:45 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <3010FB79-9EE1-42AA-BC69-39419B510CC0@me.com> References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> <4E9EBB0D.6060805@gmail.com> <3010FB79-9EE1-42AA-BC69-39419B510CC0@me.com> Message-ID: <4E9EDE61.9030603@gmail.com> Hello Gary, On 19/10/11 15:38, Gary Robinson wrote: >>> You would like pypy+numpy+scipy so that you could write fast >>> python-only algorithms and still use the existing libraries. I >>> suppose this is a perfectly reasonable usecase, and indeed >>> the current plan does not focus on this. >> > > Yes. That is exactly what I want. [cut] thank you for the input: indeed, I agree that for your usecase the current plan is not the best. 
OTOH, there is probably someone else for whom the current plan is better than others, we cannot make everyone happy at the same time, although we might do it eventually :-). By the way, did you ever consider the possibility of running pypy and cpython side-by-side? You do your pure-python computation on pypy, then you pipe them (e.g. by using execnet) to a cpython process which does the processing using scipy. Depending on how big the data is, the overhead of passing the data around should not be too high. It's not ideal, but it might be worth trying. ciao, Anto From garyrob at me.com Wed Oct 19 16:38:30 2011 From: garyrob at me.com (Gary Robinson) Date: Wed, 19 Oct 2011 10:38:30 -0400 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <4E9EDE61.9030603@gmail.com> References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> <4E9EBB0D.6060805@gmail.com> <3010FB79-9EE1-42AA-BC69-39419B510CC0@me.com> <4E9EDE61.9030603@gmail.com> Message-ID: > By the way, did you ever consider the possibility of running pypy and cpython side-by-side? > You do your pure-python computation on pypy, then you pipe them (e.g. by using execnet) to a cpython process which does the processing using scipy. Depending on how big the data is, the overhead of passing the data around should not be too high. Absolutely -- I've thought about that general approach though this is the first time I recall hearing about execnet. Of course I'm concerned that the overhead would be too much in some cases, such as huge numbers of calls to scipy.stats.stats.chisqprob. Such overhead seems like it might cancel all the benefit of PyPy, depending on the script. But maybe it's not as much overhead as I fear. For example I see that execnet does not do pickling. Hm.
-- Gary Robinson CTO Emergent Discovery, LLC personal email: garyrob at me.com work email: grobinson at emergentdiscovery.com Company: http://www.emergentdiscovery.com Blog: http://www.garyrobinson.net On Oct 19, 2011, at 10:27 AM, Antonio Cuni wrote: > Hello Gary, > > On 19/10/11 15:38, Gary Robinson wrote: >>>> You would like pypy+numpy+scipy so that you could write fast >>>> python-only algorithms and still use the existing libraries. I >>>> suppose this is a perfectly reasonable usecase, and indeed >>>> the current plan does not focus on this. >>> >> >> Yes. That is exactly what I want. > [cut] > > thank you for the input: indeed, I agree that for your usecase the current plan is not the best. OTOH, there is probably someone else for which the current plan is better than others, we cannot make everyone happy at the same time, although we might do it eventually :-). > > By the way, did you ever considered the possibility of running pypy and cpython side-by-side? > You do your pure-python computation on pypy, then you pipe them (e.g. by using execnet) to a cpython process which does the processing using scipy. Depending on how big the data is, the overhead of passing the data around should not be too high > > It's not ideal, but it might be worth of being tried. > > ciao, > Anto From garyrob at me.com Wed Oct 19 15:38:22 2011 From: garyrob at me.com (Gary Robinson) Date: Wed, 19 Oct 2011 09:38:22 -0400 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <4E9EBB0D.6060805@gmail.com> References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> <4E9EBB0D.6060805@gmail.com> Message-ID: <3010FB79-9EE1-42AA-BC69-39419B510CC0@me.com> >> You would like pypy+numpy+scipy so that you could write fast >> python-only algorithms and still use the existing libraries. I >> suppose this is a perfectly reasonable usecase, and indeed >> the current plan does not focus on this. > Yes. That is exactly what I want. 
> However, I'd like to underline that to write "fast python-only algorithms", you most probably still need a fast numpy in the way it is written right now (unless you want to write your algorithms without using numpy at all) I make very little use of numpy itself other than as the way to use scipy; I tend to write python-only algorithms that don't use numpy. As Peter Cock says in his own reply, a little bit of slowdown in regular numpy use compared to CPython would be fine, though a LOT of slowdown could be a problem. Now, I'm not saying I'm typical. I have no idea how typical I am, though it sounds like Peter Cock is in a similar boat. I'm sure I'd benefit from doing more with numpy. But I simply cannot do without scipy, or accessing equivalent functionality by using R or another package. I'd much rather use scipy and see its capabilities grow than use R. >From my own bias, I'd assume that what would benefit the scientific community most is scipy integration first, and a faster numpy second. Scipy simply provides too many tools that are absolutely essential. The project for providing a common interface to IronPython, etc. sounded extremely promising in that regard -- it makes enormous sense to me that all different versions of python should have a way to access scipy, even if custom code that uses numpy is a little bit slower. My main concern is that the glue to frequently-called scipy functions such as scipy.stats.stats.chisqprob wouldn't be so much slower that my overall script isn't benefiting from PyPy. Obviously, I understand that this is an open-source project and people develop what they are interested in. I'm just giving my individual perspective, for whatever it may be worth. 
-- Gary Robinson CTO Emergent Discovery, LLC personal email: garyrob at me.com work email: grobinson at emergentdiscovery.com Company: http://www.emergentdiscovery.com Blog: http://www.garyrobinson.net On Oct 19, 2011, at 7:57 AM, Antonio Cuni wrote: > On 19/10/11 13:42, Antonio Cuni wrote: > >> I'm not sure to interpret your sentence correctly. >> Are you saying that you would still want a pypy+numpy+scipy, even if it ran >> things slower than CPython? May I ask why? > > ah sorry, I think I misunderstood your email. > > You would like pypy+numpy+scipy so that you could write fast python-only algorithms and still use the existing libraries. I suppose this is a perfectly reasonable usecase, and indeed the current plan does not focus on this. > > However, I'd like to underline that to write "fast python-only algorithms", you most probably still need a fast numpy in the way it is written right now (unless you want to write your algorithms without using numpy at all). If we went to the slow-but-scipy-compatible approach, any pure python algorithm which interfaces with numpy arrays would be terribly slow. > > ciao, > Anto From chris.felton at gmail.com Wed Oct 19 15:49:57 2011 From: chris.felton at gmail.com (Christopher Felton) Date: Wed, 19 Oct 2011 08:49:57 -0500 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> Message-ID: On 10/19/2011 8:32 AM, Wes McKinney wrote: > On Wed, Oct 19, 2011 at 6:35 AM, Gary Robinson wrote: >>> Jacob Hallén, 18.10.2011 18:41: >>>> I'd just like to note that the compelling reason for PyPy to develop numpy >>>> support is popular demand. We did a survey last spring, in which an >>>> overwhelming number of people asked for numpy support. This indicates that >>>> there is a large group of people who will reap benefits from using PyPy >>>> plus Numpy, without specific support for scipy packages. >>> >>> Depends on what the question was. 
Many people say "NumPy", and when you ask >>> back, you find out that they actually meant "SciPy" or at least "NumPy and >>> parts x, y and z of its ecosystem that I commonly use". >> >> I was one of the people who responded to that poll, and I have to say that I fall into the category "they actually meant 'SciPy'". I assumed that there would be an interface to numpy that would also support scipy. SciPy has a lot of packages that run various things like SVD very efficiently because it does them in C. I need access to those packages. I also write my own algorithms. For those, I want to benefit from PyPy's speed and don't necessarily want to make the algorithms fit into numpy's array-processing approach. >> > > I suspect most of the poll respondents see NumPy as representing a lot > more than just a fast ndarray-- i.e. that being able to type > > import numpy > Or a misunderstanding how much the other packages rely on low-level functions. There might have been a perception that a good chunk of the supporting libraries required only the numpy API (Python side API). My guess, there has to be some packages that only require numpy on the Python side and none of the foreign interfaces. I am surprised that MPL is mentioned requiring the low-level interfaces and not simply the num arrays API. If the pypy team is successful with numpypy, I would guess some packages would work (naive guess?) with minimal modification? My preference would be to have full support (numpy/scipy/matplotlib) but because of the success of pypy on MyHDL I would be happy (extremely happy) with fast running numpy without scipy. Regards Chris From jake.biesinger at gmail.com Wed Oct 19 17:21:18 2011 From: jake.biesinger at gmail.com (Jacob Biesinger) Date: Wed, 19 Oct 2011 08:21:18 -0700 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: Message-ID: > > I think the original topic of this discussion is numpy, not scipy. > The answer is that I don't know. 
I am sure that people will > reimplement whatever module is needed, or design a generic but slower > way to interface with C a la cpyext, or write a different C API, or > rely on Cython versions of their libraries and have Cython support in > PyPy... or more likely all of these approaches and more. > Great discussion... Any idea how "micro" the micronumpy implementation will be? Numpy includes matrix multiplication, eigenvalue decomposition, histogramming, etc. For all the people meaning Scipy when they get excited for a Numpy implementation, their feature of choice may be included in Numpy after all. -------------- next part -------------- An HTML attachment was scrubbed... URL: From garyrob at me.com Wed Oct 19 16:49:37 2011 From: garyrob at me.com (Gary Robinson) Date: Wed, 19 Oct 2011 10:49:37 -0400 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <4E9EDE61.9030603@gmail.com> References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> <4E9EBB0D.6060805@gmail.com> <3010FB79-9EE1-42AA-BC69-39419B510CC0@me.com> <4E9EDE61.9030603@gmail.com> Message-ID: I wonder if it would be worthwhile to have another poll, this time clearly differentiating between a) focusing on integrating the existing numpy in such a way that scipy and other such packages are also enabled, probably using the existing project to provide a C interface that IronPython and other Python variants can use; or b) the current path of replacing much of numpy, making it much faster but leaving scipy out in the cold for quite some time. I don't think it's clear, at this point, which approach would generate more monetary contributions. I suspect it might be (a) because of commercial scientific research that depends on scipy. Of course, if the path decision is already firm, then such a poll would be moot. 
-- Gary Robinson CTO Emergent Discovery, LLC personal email: garyrob at me.com work email: grobinson at emergentdiscovery.com Company: http://www.emergentdiscovery.com Blog: http://www.garyrobinson.net On Oct 19, 2011, at 10:27 AM, Antonio Cuni wrote: > Hello Gary, > > On 19/10/11 15:38, Gary Robinson wrote: >>>> You would like pypy+numpy+scipy so that you could write fast >>>> python-only algorithms and still use the existing libraries. I >>>> suppose this is a perfectly reasonable usecase, and indeed >>>> the current plan does not focus on this. >>> >> >> Yes. That is exactly what I want. > [cut] > > thank you for the input: indeed, I agree that for your usecase the current plan is not the best. OTOH, there is probably someone else for which the current plan is better than others, we cannot make everyone happy at the same time, although we might do it eventually :-). > > By the way, did you ever considered the possibility of running pypy and cpython side-by-side? > You do your pure-python computation on pypy, then you pipe them (e.g. by using execnet) to a cpython process which does the processing using scipy. Depending on how big the data is, the overhead of passing the data around should not be too high > > It's not ideal, but it might be worth of being tried. 
> > ciao, > Anto From fijall at gmail.com Wed Oct 19 19:25:11 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 19 Oct 2011 19:25:11 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> <4E9EBB0D.6060805@gmail.com> <3010FB79-9EE1-42AA-BC69-39419B510CC0@me.com> <4E9EDE61.9030603@gmail.com> Message-ID: On Wed, Oct 19, 2011 at 4:49 PM, Gary Robinson wrote: > I wonder if it would be worthwhile to have another poll, this time clearly differentiating between > > a) focusing on integrating the existing numpy in such a way that scipy and other such packages are also enabled, probably using the existing project to provide a C interface that IronPython and other Python variants can use; or > > b) the current path of replacing much of numpy, making it much faster but leaving scipy out in the cold for quite some time. > > I don't think it's clear, at this point, which approach would generate more monetary contributions. I suspect it might be (a) because of commercial scientific research that depends on scipy. Of course, if the path decision is already firm, then such a poll would be moot. > It's however clear which approach is harder and more painful. I for one don't subscribe for emulating all the subtleties of CPython C API nor numpy API. 
From p.j.a.cock at googlemail.com Wed Oct 19 19:36:27 2011 From: p.j.a.cock at googlemail.com (Peter Cock) Date: Wed, 19 Oct 2011 18:36:27 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> <4E9EBB0D.6060805@gmail.com> <3010FB79-9EE1-42AA-BC69-39419B510CC0@me.com> <4E9EDE61.9030603@gmail.com> Message-ID: On Wed, Oct 19, 2011 at 6:25 PM, Maciej Fijalkowski wrote: > On Wed, Oct 19, 2011 at 4:49 PM, Gary Robinson wrote: >> I wonder if it would be worthwhile to have another poll, this time >> clearly differentiating between >> >> a) focusing on integrating the existing numpy in such a way that >> scipy and other such packages are also enabled, probably using >> the existing project to provide a C interface that IronPython and >> other Python variants can use; or >> >> b) the current path of replacing much of numpy, making it much >> faster but leaving scipy out in the cold for quite some time. >> >> I don't think it's clear, at this point, which approach would generate >> more monetary contributions. I suspect it might be (a) because of >> commercial scientific research that depends on scipy. Of course, >> if the path decision is already firm, then such a poll would be moot. >> > > It's however clear which approach is harder and more painful. I for > one don't subscribe for emulating all the subtleties of CPython C API > nor numpy API. If you do that, you are not porting numpy, and the current code-name of micronumpy is quite appropriate ;) I would be much more interested in (a), since as I understand it (b) would only cater to libraries using just numpy's python interface (and even there PyPy still has a lot of work to do). 
Peter From holger at merlinux.eu Wed Oct 19 19:59:47 2011 From: holger at merlinux.eu (holger krekel) Date: Wed, 19 Oct 2011 17:59:47 +0000 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> <4E9EBB0D.6060805@gmail.com> <3010FB79-9EE1-42AA-BC69-39419B510CC0@me.com> <4E9EDE61.9030603@gmail.com> Message-ID: <20111019175947.GQ27936@merlinux.eu> On Wed, Oct 19, 2011 at 10:38 -0400, Gary Robinson wrote: > > By the way, did you ever considered the possibility of running pypy and cpython side-by-side? > > You do your pure-python computation on pypy, then you pipe them (e.g. by using execnet) to a cpython process which does the processing using scipy. Depending on how big the data is, the overhead of passing the data around should not be too high > . > Absolutely -- I've thought about that general approach though this is the first time I recall hearing about execnet. Of course I'm concerned that the overhead would be too much in some cases, such as huge numbers of calls to scipy.stats.stats.chisqprob. Such overhead seems like it might cancel all the benefit of PyPy, depending on the script. But maybe it's not as much overhead as I fear. For example I see that execnet does not do pickling. Hm. You can add pickling on top of execnet by using dumps/loads and sending the bytes which is fast. I recommend to use pickling with great care and not everywhere though. holger > -- > > Gary Robinson > CTO > Emergent Discovery, LLC > personal email: garyrob at me.com > work email: grobinson at emergentdiscovery.com > Company: http://www.emergentdiscovery.com > Blog: http://www.garyrobinson.net > > > > > On Oct 19, 2011, at 10:27 AM, Antonio Cuni wrote: > > > Hello Gary, > > > > On 19/10/11 15:38, Gary Robinson wrote: > >>>> You would like pypy+numpy+scipy so that you could write fast > >>>> python-only algorithms and still use the existing libraries. 
I > >>>> suppose this is a perfectly reasonable usecase, and indeed > >>>> the current plan does not focus on this. > >>> > >> > >> Yes. That is exactly what I want. > > [cut] > > > > thank you for the input: indeed, I agree that for your usecase the current plan is not the best. OTOH, there is probably someone else for which the current plan is better than others, we cannot make everyone happy at the same time, although we might do it eventually :-). > > > > By the way, did you ever considered the possibility of running pypy and cpython side-by-side? > > You do your pure-python computation on pypy, then you pipe them (e.g. by using execnet) to a cpython process which does the processing using scipy. Depending on how big the data is, the overhead of passing the data around should not be too high > > > > It's not ideal, but it might be worth of being tried. > > > > ciao, > > Anto > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From santagada at gmail.com Wed Oct 19 19:47:39 2011 From: santagada at gmail.com (Leonardo Santagada) Date: Wed, 19 Oct 2011 15:47:39 -0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> <4E9EBB0D.6060805@gmail.com> <3010FB79-9EE1-42AA-BC69-39419B510CC0@me.com> <4E9EDE61.9030603@gmail.com> Message-ID: On Wed, Oct 19, 2011 at 3:36 PM, Peter Cock wrote: > On Wed, Oct 19, 2011 at 6:25 PM, Maciej Fijalkowski wrote: >> On Wed, Oct 19, 2011 at 4:49 PM, Gary Robinson wrote: >>> I wonder if it would be worthwhile to have another poll, this time >>> clearly differentiating between >>> >>> a) focusing on integrating the existing numpy in such a way that >>> scipy and other such packages are also enabled, probably using >>> the existing project to provide a C interface that IronPython and >>> other Python variants can use; or >>> >>> b) the 
current path of replacing much of numpy, making it much >>> faster but leaving scipy out in the cold for quite some time. >>> >>> I don't think it's clear, at this point, which approach would generate >>> more monetary contributions. I suspect it might be (a) because of >>> commercial scientific research that depends on scipy. Of course, >>> if the path decision is already firm, then such a poll would be moot. >>> >> >> It's however clear which approach is harder and more painful. I for >> one don't subscribe for emulating all the subtleties of CPython C API >> nor numpy API. > > If you do that, you are not porting numpy, and the current code-name > of micronumpy is quite appropriate ;) > > I would be much more interested in (a), since as I understand it > (b) would only cater to libraries using just numpy's python interface > (and even there PyPy still has a lot of work to do). Is any pypy dev interested in (a)? I really don't think there is, so we might continue discussing the road to take. For games and for a lot of matrix processing (AI, encryption) numpy is good enough. Matplotlib could be easily ported to pypy, just by dropping some features (AGG and part of the freetype support code). FFT, blas and a lot of pieces of scipy just need a pointer to data so they should work (if there is a ctypes wrapper). Cython is being worked on so when that is ready parts of sage will just run, as everything else using cython. Why not move more of scipy to cython/ctypes? That is what you guys want for the future, and then it would not make anyone have to work on something they have no interest in. 
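The "just need a pointer to data" point above can be illustrated with a standard-library-only sketch: ctypes hands compiled code a raw contiguous buffer, with no per-element conversion. As an assumption for illustration, libc's qsort stands in here for an FFT or BLAS routine — this mirrors the classic ctypes tutorial example, and a real wrapper would instead load the FFTW or BLAS shared library and pass the same kind of buffer.

```python
import ctypes
import ctypes.util

# Load the C runtime; qsort is our stand-in for a numeric kernel like FFT/BLAS.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Comparator callback: qsort hands it pointers into the raw double buffer.
CMP = ctypes.CFUNCTYPE(ctypes.c_int,
                       ctypes.POINTER(ctypes.c_double),
                       ctypes.POINTER(ctypes.c_double))

def compare(a, b):
    # Return <0, 0, or >0 like a C comparator.
    return (a[0] > b[0]) - (a[0] < b[0])

# A contiguous C array of doubles: this is the "pointer to data" the C side sees.
data = (ctypes.c_double * 4)(3.0, 1.0, 4.0, 1.5)
libc.qsort(data, len(data), ctypes.sizeof(ctypes.c_double), CMP(compare))
print(list(data))  # [1.0, 1.5, 3.0, 4.0]
```

Because only a pointer crosses the boundary, the C routine works in place on the buffer — which is why a thin ctypes layer can expose compiled numeric code regardless of which Python interpreter is driving it.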
-- Leonardo Santagada From cournape at gmail.com Wed Oct 19 20:52:45 2011 From: cournape at gmail.com (David Cournapeau) Date: Wed, 19 Oct 2011 19:52:45 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> <4E9EB79A.7060901@gmail.com> <4E9EBB0D.6060805@gmail.com> <3010FB79-9EE1-42AA-BC69-39419B510CC0@me.com> <4E9EDE61.9030603@gmail.com> Message-ID: On Wed, Oct 19, 2011 at 6:47 PM, Leonardo Santagada wrote: > > Why not move more of scipy to cython/ctypes? That is what you guys > want for the future, and then it would not make anyone have to work on > something they have no interest in. Independently of pypy's direction w.r.t. numpy, this will happen. My ideal for numpy/scipy would be pure C/fortran for existing libraries with almost every python C API call done through cython. This is already happening for most packages in scipy ecosystem who do not have the long history of numpy/scipy. cheers, David From ian at ianozsvald.com Thu Oct 20 12:33:35 2011 From: ian at ianozsvald.com (Ian Ozsvald) Date: Thu, 20 Oct 2011 11:33:35 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> Message-ID: > I was one of the people who responded to that poll, and I have to say that I fall into the category "they actually meant 'SciPy'?". I'll note with regards to the survey that I also recall saying Yes to numpy but never thinking to explain that I used SciPy, the SciKits and Cython for a lot of my work (not all of it but definitely for chunks of it). Maybe a second more focused survey would be useful? Ian. -- Ian Ozsvald (A.I. 
researcher) ian at IanOzsvald.com http://IanOzsvald.com http://MorConsulting.com/ http://StrongSteam.com/ http://SocialTiesApp.com/ http://TheScreencastingHandbook.com http://FivePoundApp.com/ http://twitter.com/IanOzsvald From fijall at gmail.com Thu Oct 20 12:41:11 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 20 Oct 2011 12:41:11 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> Message-ID: On Thu, Oct 20, 2011 at 12:33 PM, Ian Ozsvald wrote: >> I was one of the people who responded to that poll, and I have to say that I fall into the category "they actually meant 'SciPy'". > > I'll note with regards to the survey that I also recall saying Yes to > numpy but never thinking to explain that I used SciPy, the SciKits and > Cython for a lot of my work (not all of it but definitely for chunks > of it). Maybe a second more focused survey would be useful? I think Armin made it clear enough but apparently not. We're not against scipy and we will try our best at supporting it. However it's not in the first part of the proposal - let's be reasonable, pypy is not magic, we can't make everything happen at the same time. We believe that emulating CPython C API is a lot of pain and numpy does not adhere to it anyway. We also see how cython is not the central part of numpy right now and it's unclear whether cython bindings would ever be done as the basis of numpy array. How would you do that anyway? So providing a basic, working and preferably fast array type is an absolute necessity to go forward. We don't want to plan upfront what then. We also think providing the array type *has* to break backwards compatibility or it'll be a major pain to implement, simply because CPython is too different. And, as a value added, fast operations on low-level data *in python* while not a priority for a lot of scipy people is a priority for a lot of pypy people - it's just very useful. 
If you have a plan how to go forward *and* immediately get scipy, please speak up, I don't. Cheers, fijal > > Ian. > > -- > Ian Ozsvald (A.I. researcher) > ian at IanOzsvald.com > > http://IanOzsvald.com > http://MorConsulting.com/ > http://StrongSteam.com/ > http://SocialTiesApp.com/ > http://TheScreencastingHandbook.com > http://FivePoundApp.com/ > http://twitter.com/IanOzsvald > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From bulkcharter at vnn.vn Fri Oct 21 05:25:58 2011 From: bulkcharter at vnn.vn (bulkcharter at vnn.vn) Date: Fri, 21 Oct 2011 10:25:58 +0700 Subject: [pypy-dev] Document Message-ID: <4E3BC5D80014EDDA@ms05dat.vnn.vn> ----- The following is an automated response ----- to your message generated on behalf of bulkcharter at vnn.vn !!! NEW EMAIL ADDRESS NOTICE !!! Pls add fix at chartering.vn in to your mailing list and delete bulkcharter at vnn.vn & bulkcharter at gmail.com which were disconnected since 30 Apr, 2011 Pls cfm your receipt ----------------------------------------------------- GLC VIETNAM / Bulk Dry Chartering Email: fix at chartering.vn From alexander.pyattaev at tut.fi Fri Oct 21 15:53:29 2011 From: alexander.pyattaev at tut.fi (Alexander Pyattaev) Date: Fri, 21 Oct 2011 16:53:29 +0300 Subject: [pypy-dev] Success histories needed In-Reply-To: <4E9EC43F.5030807@changemaker.nu> References: <4E9EC43F.5030807@changemaker.nu> Message-ID: <3916825.yLpbnYj214@hunter-laptop.tontut.fi> I think i will finalize that thing on this weekend. Of course you will get permissions to do whatever you wish with it, I am not a microsoft fan=) On Wednesday 19 October 2011 14:36:15 Bea During wrote: > Hi there > > Maciej Fijalkowski skrev 2011-10-17 10:30: > > On Mon, Oct 17, 2011 at 10:17 AM, Alex Pyattaev wrote: > >> I have a fully-functional wireless network simulation tool written in > >> pypy+swig. Is that nice? 
Have couple papers to refer to as well. If > >> you want I could write a small abstract on how it was so great to use > >> pypy (which, in fact, was really great)? > > > > Please :) > > As Maciej says - please do ;-) > > And I think we may want to be able to publish your abstract on our blog > if that would be possible? > > Cheers > > Bea > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > > > > > > > -- > > > > CronLab scanned this message. We don't think it was spam. If it was, > > please report by copying this link into your browser: > > http://cronlab01.terratel.se/mail/index.php?id=A74652E2162.6D318-&learn > > =spam&host=212.91.134.155 > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From bea at changemaker.nu Fri Oct 21 16:36:10 2011 From: bea at changemaker.nu (Bea During) Date: Fri, 21 Oct 2011 16:36:10 +0200 Subject: [pypy-dev] Success histories needed In-Reply-To: <3916825.yLpbnYj214@hunter-laptop.tontut.fi> References: <4E9EC43F.5030807@changemaker.nu> <3916825.yLpbnYj214@hunter-laptop.tontut.fi> Message-ID: <4EA1835A.2050001@changemaker.nu> Hi there Alexander Pyattaev skrev 2011-10-21 15:53: > I think i will finalize that thing on this weekend. Of course you will get > permissions to do whatever you wish with it, I am not a microsoft fan=) Thanks! We look forward to reading it - and publishing it ;-) Cheers Bea > On Wednesday 19 October 2011 14:36:15 Bea During wrote: >> Hi there >> >> Maciej Fijalkowski skrev 2011-10-17 10:30: >>> On Mon, Oct 17, 2011 at 10:17 AM, Alex Pyattaev > wrote: >>>> I have a fully-functional wireless network simulation tool written in >>>> pypy+swig. Is that nice? Have couple papers to refer to as well. 
If >>>> you want I could write a small abstract on how it was so great to use >>>> pypy (which, in fact, was really great)? >>> Please :) >> As Maciej says - please do ;-) >> >> And I think we may want to be able to publish your abstract on our blog >> if that would be possible? >> >> Cheers >> >> Bea >> >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> http://mail.python.org/mailman/listinfo/pypy-dev >>> >>> >>> >>> -- >>> >>> CronLab scanned this message. We don't think it was spam. If it was, >>> please report by copying this link into your browser: >>> http://cronlab01.terratel.se/mail/index.php?id=A74652E2162.6D318-&learn >>> =spam&host=212.91.134.155 >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > > > -- > > CronLab scanned this message. We don't think it was spam. If it was, > please report by copying this link into your browser: http://cronlab01.terratel.se/mail/index.php?id=EBB6A2E21D4.94541-&learn=spam&host=212.91.134.155 > From ian at ianozsvald.com Sun Oct 23 18:24:10 2011 From: ian at ianozsvald.com (Ian Ozsvald) Date: Sun, 23 Oct 2011 17:24:10 +0100 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> Message-ID: I agree that numpy support is a good first aim, I hope it'll open the door to scipy support later. To that end I've made my donation. 
As discussed with Fijal via a private email I felt awkward with the new project (hence me asking the question 60 emails back) as I'd offered a £600 donation which was made on the assumption that numpy+scipy support would be possible (and to be clear - this was entirely *my* assumption, made at EuroPython, before the project was defined - the error was mine). Obviously I want to see numpy supported, I do also want to see scipy (and probably cython) supported too. So, I've just donated $480USD (£300) for the numpy-pypy project as a personal donation. I'll make a second donation of $480 as and when a project is proposed that enables scipy support. This fits with my goals and I hope it helps the project move forwards. Cheers all, Ian. On 20 October 2011 11:41, Maciej Fijalkowski wrote: > On Thu, Oct 20, 2011 at 12:33 PM, Ian Ozsvald wrote: >>> I was one of the people who responded to that poll, and I have to say that I fall into the category "they actually meant 'SciPy'". >> >> I'll note with regards to the survey that I also recall saying Yes to >> numpy but never thinking to explain that I used SciPy, the SciKits and >> Cython for a lot of my work (not all of it but definitely for chunks >> of it). Maybe a second more focused survey would be useful? > > I think Armin made it clear enough but apparently not. We're not > against scipy and we will try our best at supporting it. However it's > not in the first part of the proposal - let's be reasonable, pypy is > not magic, we can't make everything happen at the same time. > > We believe that emulating CPython C API is a lot of pain and numpy > does not adhere to it anyway. We also see how cython is not the > central part of numpy right now and it's unclear whether cython > bindings would ever be done as the basis of numpy array. How would > you do that anyway? > > So providing a basic, working and preferably fast array type is an > absolute necessity to go forward. We don't want to plan upfront what > then. 
We also think providing the array type *has* to break backwards > compatibility or it'll be a major pain to implement, simply because > CPython is too different. And, as a value added, fast operations on > low-level data *in python* while not a priority for a lot of scipy > people is a priority for a lot of pypy people - it's just very useful. > > If you have a plan how to go forward *and* immediately get scipy, > please speak up, I don't. > > Cheers, > fijal > >> >> Ian. >> >> -- >> Ian Ozsvald (A.I. researcher) >> ian at IanOzsvald.com >> >> http://IanOzsvald.com >> http://MorConsulting.com/ >> http://StrongSteam.com/ >> http://SocialTiesApp.com/ >> http://TheScreencastingHandbook.com >> http://FivePoundApp.com/ >> http://twitter.com/IanOzsvald >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> > -- Ian Ozsvald (A.I. researcher) ian at IanOzsvald.com http://IanOzsvald.com http://MorConsulting.com/ http://StrongSteam.com/ http://SocialTiesApp.com/ http://TheScreencastingHandbook.com http://FivePoundApp.com/ http://twitter.com/IanOzsvald From chris0wj at gmail.com Mon Oct 24 05:33:20 2011 From: chris0wj at gmail.com (Chris Wj) Date: Sun, 23 Oct 2011 23:33:20 -0400 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> Message-ID: Donation is in here too... numpy is just the beginning step in a great direction. Go pypy! On Sun, Oct 23, 2011 at 12:24 PM, Ian Ozsvald wrote: > I agree that numpy support is a good first aim, I hope it'll open the > door to scipy support later. > > To that end I've made my donation. 
As discussed with Fijal via a > private email I felt awkward with the new project (hence me asking the > question 60 emails back) as I'd offered a £600 donation which was made > on the assumption that numpy+scipy support would be possible (and to > be clear - this was entirely *my* assumption, made at EuroPython, > before the project was defined - the error was mine). Obviously I > want to see numpy supported, I do also want to see scipy (and probably > cython) supported too. > > So, I've just donated $480USD (£300) for the numpy-pypy project as a > personal donation. I'll make a second donation of $480 as and when a > project is proposed that enables scipy support. This fits with my > goals and I hope it helps the project move forwards. > > Cheers all, > Ian. > > On 20 October 2011 11:41, Maciej Fijalkowski wrote: > > On Thu, Oct 20, 2011 at 12:33 PM, Ian Ozsvald > wrote: > >>> I was one of the people who responded to that poll, and I have to say > that I fall into the category "they actually meant 'SciPy'". > >> > >> I'll note with regards to the survey that I also recall saying Yes to > >> numpy but never thinking to explain that I used SciPy, the SciKits and > >> Cython for a lot of my work (not all of it but definitely for chunks > >> of it). Maybe a second more focused survey would be useful? > > > > I think Armin made it clear enough but apparently not. We're not > > against scipy and we will try our best at supporting it. However it's > > not in the first part of the proposal - let's be reasonable, pypy is > > not magic, we can't make everything happen at the same time. > > > > We believe that emulating CPython C API is a lot of pain and numpy > > does not adhere to it anyway. We also see how cython is not the > > central part of numpy right now and it's unclear whether cython > > bindings would ever be done as the basis of numpy array. How would > > you do that anyway? 
> > > > So providing a basic, working and preferably fast array type is an > > absolute necessity to go forward. We don't want to plan upfront what > > then. We also think providing the array type *has* to break backwards > > compatibility or it'll be a major pain to implement, simply because > > CPython is too different. And, as a value added, fast operations on > > low-level data *in python* while not a priority for a lot of scipy > > people is a priority for a lot of pypy people - it's just very useful. > > > > If you have a plan how to go forward *and* immediately get scipy, > > please speak up, I don't. > > > > Cheers, > > fijal > > > >> > >> Ian. > >> > >> -- > >> Ian Ozsvald (A.I. researcher) > >> ian at IanOzsvald.com > >> > >> http://IanOzsvald.com > >> http://MorConsulting.com/ > >> http://StrongSteam.com/ > >> http://SocialTiesApp.com/ > >> http://TheScreencastingHandbook.com > >> http://FivePoundApp.com/ > >> http://twitter.com/IanOzsvald > >> _______________________________________________ > >> pypy-dev mailing list > >> pypy-dev at python.org > >> http://mail.python.org/mailman/listinfo/pypy-dev > >> > > > > > > -- > Ian Ozsvald (A.I. researcher) > ian at IanOzsvald.com > > http://IanOzsvald.com > http://MorConsulting.com/ > http://StrongSteam.com/ > http://SocialTiesApp.com/ > http://TheScreencastingHandbook.com > http://FivePoundApp.com/ > http://twitter.com/IanOzsvald > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.livni at gmail.com Mon Oct 24 09:27:49 2011 From: jonathan.livni at gmail.com (Jonathan Livni) Date: Mon, 24 Oct 2011 09:27:49 +0200 Subject: [pypy-dev] Maintaining an exhaustive list of supported and non-supported packages Message-ID: Would it be possible to maintain a user-generated exhaustive list of supported and non-supported packages? 
I believe this would help people considering the use of PyPy in their decision to try it out. For example - I'm working on a project with the following dependencies: Python 2.7, django (with sqlite), pytz, lxml, pywin32 Sure, I'll get around to actually testing PyPy with these dependencies in a month or two, but if a supported vs. non-supported list would be available, that would get me going much faster. Also, I think such lists would motivate package maintainers to contribute to PyPy so that it would support their package, but that's just a guess. Regards, Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.leslie.ttg at gmail.com Mon Oct 24 09:33:01 2011 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Mon, 24 Oct 2011 18:33:01 +1100 Subject: [pypy-dev] Maintaining an exhaustive list of supported and non-supported packages In-Reply-To: References: Message-ID: On 24 October 2011 18:27, Jonathan Livni wrote: > Would it be possible to maintain a user-generated exhaustive list of > supported and non-supported packages? Not only would it be possible, it is currently done! https://bitbucket.org/pypy/compatibility/wiki/Home We should probably link here from the 'compatibility' page on the user-facing website. -- William Leslie From jonathan.livni at gmail.com Mon Oct 24 09:43:44 2011 From: jonathan.livni at gmail.com (Jonathan Livni) Date: Mon, 24 Oct 2011 09:43:44 +0200 Subject: [pypy-dev] Maintaining an exhaustive list of supported and non-supported packages In-Reply-To: References: Message-ID: Excellent! William - you're right, it would be good to have a link from the compatibility page, that's the first place I looked before starting this thread :) On Mon, Oct 24, 2011 at 9:33 AM, William ML Leslie < william.leslie.ttg at gmail.com> wrote: > On 24 October 2011 18:27, Jonathan Livni wrote: > > Would it be possible to maintain a user-generated exhaustive list of > > supported and non-supported packages? 
> > Not only would it be possible, it is currently done! > > https://bitbucket.org/pypy/compatibility/wiki/Home > > We should probably link here from the 'compatibility' page on the > user-facing website. > > -- > William Leslie > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Mon Oct 24 09:50:54 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 24 Oct 2011 09:50:54 +0200 Subject: [pypy-dev] Maintaining an exhaustive list of supported and non-supported packages In-Reply-To: References: Message-ID: 2011/10/24 Jonathan Livni > Would it be possible to maintain a user-generated exhaustive list of > supported and non-supported packages? > I believe this would help people considering the use of PyPy in their > decision to try it out. > For example - I'm working on a project with the following dependencies: > Python 2.7, django (with sqlite), pytz, lxml, pywin32 > Sure, I'll get around to actually testing PyPy with these dependencies in a > month or two, but if a supported vs. non-supported list would be available, > that would get me going much faster. > Also, I think such lists would motivate package maintainers to contribute > to PyPy so that it would support their package, but that's just a guess. > There is already the "PyPy compatibility" wiki, where your input is welcome: https://bitbucket.org/pypy/compatibility/wiki/Home About the packages you mentioned: django and sqlite are known to work, lxml does not work at the moment because it relies on too many unsupported C API functions, and I managed to make pywin32 work correctly, but only with a patch that I haven't yet submitted to the pywin32 developers. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fijall at gmail.com Mon Oct 24 13:39:20 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 24 Oct 2011 13:39:20 +0200 Subject: [pypy-dev] Questions on the pypy+numpy project In-Reply-To: References: <891C6B41-044B-4F3C-A5AC-D9EC82EF43CD@me.com> Message-ID: On Sun, Oct 23, 2011 at 6:24 PM, Ian Ozsvald wrote: > I agree that numpy support is a good first aim, I hope it'll open the > door to scipy support later. > > To that end I've made my donation. As discussed with Fijal via a > private email I felt awkward with the new project (hence me asking the > question 60 emails back) as I'd offered a £600 donation which was made > on the assumption that numpy+scipy support would be possible (and to > be clear - this was entirely *my* assumption, made at EuroPython, > before the project was defined - the error was mine). Obviously I > want to see numpy supported, I do also want to see scipy (and probably > cython) supported too. > > So, I've just donated $480USD (£300) for the numpy-pypy project as a > personal donation. I'll make a second donation of $480 as and when a > project is proposed that enables scipy support. This fits with my > goals and I hope it helps the project move forwards. > > Cheers all, > Ian. Thanks Ian for the donation! It makes perfect sense. > > On 20 October 2011 11:41, Maciej Fijalkowski wrote: >> On Thu, Oct 20, 2011 at 12:33 PM, Ian Ozsvald wrote: >>>> I was one of the people who responded to that poll, and I have to say that I fall into the category "they actually meant 'SciPy'?". >>> >>> I'll note with regards to the survey that I also recall saying Yes to >>> numpy but never thinking to explain that I used SciPy, the SciKits and >>> Cython for a lot of my work (not all of it but definitely for chunks >>> of it). Maybe a second more focused survey would be useful? >> >> I think Armin made it clear enough but apparently not. We're not >> against scipy and we will try our best at supporting it. 
However it's >> not in the first part of the proposal - let's be reasonable, pypy is >> not magic, we can't make everything happen at the same time. >> >> We believe that emulating CPython C API is a lot of pain and numpy >> does not adhere to it anyway. We also see how cython is not the >> central part of numpy right now and it's unclear whether cython >> bindings would ever be done as the basis of numpy array. How would >> you do that anyway? >> >> So providing a basic, working and preferably fast array type is an >> absolute necessity to go forward. We don't want to plan upfront what >> then. We also think providing the array type *has* to break backwards >> compatibility or it'll be a major pain to implement, simply because >> CPython is too different. And, as a value added, fast operations on >> low-level data *in python* while not a priority for a lot of scipy >> people is a priority for a lot of pypy people - it's just very useful. >> >> If you have a plan how to go forward *and* immediately get scipy, >> please speak up, I don't. >> >> Cheers, >> fijal >> >>> >>> Ian. >>> >>> -- >>> Ian Ozsvald (A.I. researcher) >>> ian at IanOzsvald.com >>> >>> http://IanOzsvald.com >>> http://MorConsulting.com/ >>> http://StrongSteam.com/ >>> http://SocialTiesApp.com/ >>> http://TheScreencastingHandbook.com >>> http://FivePoundApp.com/ >>> http://twitter.com/IanOzsvald >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> http://mail.python.org/mailman/listinfo/pypy-dev >>> >> > > > > -- > Ian Ozsvald (A.I. 
researcher) > ian at IanOzsvald.com > > http://IanOzsvald.com > http://MorConsulting.com/ > http://StrongSteam.com/ > http://SocialTiesApp.com/ > http://TheScreencastingHandbook.com > http://FivePoundApp.com/ > http://twitter.com/IanOzsvald > From fijall at gmail.com Tue Oct 25 10:03:16 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 25 Oct 2011 10:03:16 +0200 Subject: [pypy-dev] Developer selection for Py3k and Numpy In-Reply-To: References: <4E9D65C0.5090607@gmx.de> <4E9D890B.4060609@gmail.com> Message-ID: On Tue, Oct 18, 2011 at 6:09 PM, Maciej Fijalkowski wrote: > On Tue, Oct 18, 2011 at 4:11 PM, Antonio Cuni wrote: >> On 18/10/11 13:40, Carl Friedrich Bolz wrote: >>> >>> Hi all, >>> >>> Now that we are getting in some money for our Py3k [1] and Numpy [2] >>> funding proposals (thank you very very much, for everybody who >>> contributed!) it is time to think more concretely about the actual >>> execution. >>> >>> Therefore I want to ask for PyPy developers that are interested in >>> getting paid for their work on the first steps of the Numpy or Py3k >>> proposals to step forward. To be applicable you need to be an >>> experienced PyPy developer who worked in this area before (Numpy) or on >>> the Python interpreter (Py3k). >> >> I'd like to be considered to help implementing the Py3k proposal. >> >> ciao, >> Anto >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> > > I would like to be considered for both (with the slight preference for numpy). > I meant I would like to work on both py3k and numpy proposal and if there is a choice I would prefer numpy. 
Cheers, fijal From Ronny.Pfannschmidt at gmx.de Wed Oct 26 10:39:42 2011 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Wed, 26 Oct 2011 10:39:42 +0200 Subject: [pypy-dev] Developer selection for Py3k and Numpy In-Reply-To: <4E9D65C0.5090607@gmx.de> References: <4E9D65C0.5090607@gmx.de> Message-ID: <4EA7C74E.9090304@gmx.de> On 10/18/2011 01:40 PM, Carl Friedrich Bolz wrote: > Hi all, > > Now that we are getting in some money for our Py3k [1] and Numpy [2] > funding proposals (thank you very very much, for everybody who > contributed!) it is time to think more concretely about the actual > execution. > > Therefore I want to ask for PyPy developers that are interested in > getting paid for their work on the first steps of the Numpy or Py3k > proposals to step forward. To be applicable you need to be an > experienced PyPy developer who worked in this area before (Numpy) or on > the Python interpreter (Py3k). > > Based on these answers we will then select and announce developers to > work on the proposals. The work will be started at the upcoming > Gothenburg sprint. > > Cheers, > > Carl Friedrich consider me for py3k as well, pyrepl will be some effort to get reasonable -- Ronny > > > [1] http://pypy.org/numpydonate.html > [2] http://pypy.org/py3donate.html > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 900 bytes Desc: OpenPGP digital signature URL: From mark.pearse at skynet.be Thu Oct 27 21:20:35 2011 From: mark.pearse at skynet.be (Mark Pearse) Date: Thu, 27 Oct 2011 21:20:35 +0200 Subject: [pypy-dev] Goteborg sprint - newcomer Message-ID: Hi, I would like to come to the next sprint in Goteborg. 
I have been following the Pypy project with interest since its inception, but this is the first time I have been able to arrange free time to come and participate. I do not have any particular focus for my interest and would be happy to work on whatever people suggest. My degree was in Mathematics, but I have worked most of my life as a programmer. I did an occam compiler for the Manchester dataflow machine as an MSc project. I heard about Pypy while researching for my own (now dead) project to do an opensource Prograph implementation. I will be making my travel arrangements tomorrow. At the moment I plan to leave Brussels by train on the 1st, probably arriving Goteborg early afternoon on the 2nd, and to travel back on the 10th. I shall be looking for cheap accommodation and will try to book the Veckobostader as suggested in the blog. I would be happy to share a room if anyone wants to. Any comments or suggestions about travel or accommodation would be gratefully received. Best regards to all, Mark Pearse -------------- next part -------------- An HTML attachment was scrubbed... URL: From techtonik at gmail.com Fri Oct 28 09:40:12 2011 From: techtonik at gmail.com (anatoly techtonik) Date: Fri, 28 Oct 2011 10:40:12 +0300 Subject: [pypy-dev] Google Groups Mirror Message-ID: It is not very convenient to read PyPy archives and track selected threads. Is it possible to provide a groups mirror (even read-only will suffice)? -- anatoly t. From fijall at gmail.com Fri Oct 28 09:42:18 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 28 Oct 2011 09:42:18 +0200 Subject: [pypy-dev] Google Groups Mirror In-Reply-To: References: Message-ID: On Fri, Oct 28, 2011 at 9:40 AM, anatoly techtonik wrote: > It is not very convenient to read PyPy archives and track selected > threads. Is it possible to provide a groups mirror (even read-only > will suffice)? I think gmane has a mirror. Is it good enough? 
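[Editor's note: a read-only mirror of the kind asked for above only needs the raw mbox archives that mailman already publishes for each month. A minimal sketch of walking such a download with Python's stdlib `mailbox` module — the `pypy-dev.mbox` path below is a hypothetical local download, not a file the project publishes under that name:]

```python
import mailbox

def iter_subjects(path):
    """Yield (subject, date) for each message in a local mbox archive.

    `path` points at a downloaded mailman archive file; replaying these
    messages into another service is all a read-only mirror would need.
    """
    for msg in mailbox.mbox(path):
        yield msg["Subject"], msg["Date"]
```

For example, `for subj, date in iter_subjects("pypy-dev.mbox"): print(date, subj)` would list every message in the downloaded archive.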
From techtonik at gmail.com Fri Oct 28 11:10:19 2011 From: techtonik at gmail.com (anatoly techtonik) Date: Fri, 28 Oct 2011 12:10:19 +0300 Subject: [pypy-dev] Google Groups Mirror In-Reply-To: References: Message-ID: On Fri, Oct 28, 2011 at 10:42 AM, Maciej Fijalkowski wrote: > On Fri, Oct 28, 2011 at 9:40 AM, anatoly techtonik wrote: >> It is not very convenient to read PyPy archives and track selected >> threads. Is it possible to provide a groups mirror (even read-only >> will suffice)? > > I think gmane has a mirror. Is it good enough? No. 1. It doesn't pop up in search results - http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=pypy+list+archive 2. I can't find how to subscribe to single threads http://news.gmane.org/gmane.comp.python.pypy -- anatoly t. From santagada at gmail.com Fri Oct 28 15:09:37 2011 From: santagada at gmail.com (Leonardo Santagada) Date: Fri, 28 Oct 2011 11:09:37 -0200 Subject: [pypy-dev] Google Groups Mirror In-Reply-To: References: Message-ID: On Fri, Oct 28, 2011 at 5:40 AM, anatoly techtonik wrote: > It is not very convenient to read PyPy archives and track selected > threads. Is it possible to provide a groups mirror (even read-only > will suffice)? We did this for the python-brasil mailing list, the only thing you need is an archive with all the messages, I can try to find the script we used if someone wants to send the messages. -- Leonardo Santagada From ram at rachum.com Sat Oct 29 01:24:28 2011 From: ram at rachum.com (Ram Rachum) Date: Sat, 29 Oct 2011 01:24:28 +0200 Subject: [pypy-dev] Where is `float.is_integer`? Message-ID: Guys, am I missing something? ----------------------------- *Python 2.7.2* (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32Type "help", "copyright", "credits" or "license" for more information. 
>>> x = 4.5 >>> x.is_integer() False ----------------------------- Python 2.7.1 (080f42d5c4b4, Aug 23 2011, 11:41:11) [*PyPy 1.6.0* with MSC v.1500 32 bit] on win32 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``- It's hard to say exactly what constitutes research in the computer world, but as a first approximation, it's software that doesn't have users.'' >>>> x = 4.5 >>>> x.is_integer() Traceback (most recent call last): File "", line 1, in AttributeError: 'float' object has no attribute 'is_integer' ----------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gaynor at gmail.com Sat Oct 29 01:27:04 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Fri, 28 Oct 2011 19:27:04 -0400 Subject: [pypy-dev] Where is `float.is_integer`? In-Reply-To: References: Message-ID: On Fri, Oct 28, 2011 at 7:24 PM, Ram Rachum wrote: > Guys, am I missing something? > > ----------------------------- > > *Python 2.7.2* (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit > (Intel)] on win32Type "help", "copyright", "credits" or "license" for more > information. > >>> x = 4.5 > >>> x.is_integer() > False > > ----------------------------- > > Python 2.7.1 (080f42d5c4b4, Aug 23 2011, 11:41:11) > [*PyPy 1.6.0* with MSC v.1500 32 bit] on win32 > Type "help", "copyright", "credits" or "license" for more information. 
> And now for something completely different: ``- It's hard to say exactly > what > constitutes research in the computer world, but as a first approximation, > it's > software that doesn't have users.'' > >>>> x = 4.5 > >>>> x.is_integer() > Traceback (most recent call last): > File "", line 1, in > AttributeError: 'float' object has no attribute 'is_integer' > > ----------------------------- > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > Nope, it appears we're missing. In our defense, CPython has *zero* tests for it, so I'd just as soon assume it doesn't work at all... Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From ram at rachum.com Sat Oct 29 03:03:51 2011 From: ram at rachum.com (Ram Rachum) Date: Sat, 29 Oct 2011 03:03:51 +0200 Subject: [pypy-dev] Where is `float.is_integer`? In-Reply-To: References: Message-ID: On Sat, Oct 29, 2011 at 1:27 AM, Alex Gaynor wrote: > > > On Fri, Oct 28, 2011 at 7:24 PM, Ram Rachum wrote: > >> Guys, am I missing something? >> >> ----------------------------- >> >> *Python 2.7.2* (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit >> (Intel)] on win32Type "help", "copyright", "credits" or "license" for more >> information. >> >>> x = 4.5 >> >>> x.is_integer() >> False >> >> ----------------------------- >> >> Python 2.7.1 (080f42d5c4b4, Aug 23 2011, 11:41:11) >> [*PyPy 1.6.0* with MSC v.1500 32 bit] on win32 >> Type "help", "copyright", "credits" or "license" for more information. 
>> And now for something completely different: ``- It's hard to say exactly >> what >> constitutes research in the computer world, but as a first approximation, >> it's >> software that doesn't have users.'' >> >>>> x = 4.5 >> >>>> x.is_integer() >> Traceback (most recent call last): >> File "", line 1, in >> AttributeError: 'float' object has no attribute 'is_integer' >> >> ----------------------------- >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> >> > Nope, it appears we're missing. In our defense, CPython has *zero* tests > for it, so I'd just as soon assume it doesn't work at all... > > Alex > > I guess a script could be made that would go over *all* the classes in CPython, see all their methods, compare to those of PyPy, and point out which ones we forgot. Ram. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wickedgrey at gmail.com Sat Oct 29 03:06:51 2011 From: wickedgrey at gmail.com (Eli Stevens (Gmail)) Date: Fri, 28 Oct 2011 18:06:51 -0700 Subject: [pypy-dev] Where is `float.is_integer`? In-Reply-To: References: Message-ID: Isn't the solution to submit a bug to CPython noting the hole in their test suite? Eli From exarkun at twistedmatrix.com Sat Oct 29 03:13:55 2011 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Sat, 29 Oct 2011 01:13:55 -0000 Subject: [pypy-dev] Where is `float.is_integer`? In-Reply-To: References: Message-ID: <20111029011355.23178.10270406.divmod.xquotient.753@localhost.localdomain> On 01:03 am, ram at rachum.com wrote: >> >I guess a script could be made that would go over *all* the classes in >CPython, see all their methods, compare to those of PyPy, and point out >which ones we forgot. One could also examine a test coverage report from CPython and add unit tests for all of the uncovered functionality. 
;) Jean-Paul From fijall at gmail.com Sat Oct 29 12:04:13 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 29 Oct 2011 12:04:13 +0200 Subject: [pypy-dev] PyPy's sandbox used as a class help Message-ID: I thought people might be interested in that: http://pdos.csail.mit.edu/6.858/2011/labs/lab4.html From matti.picus at gmail.com Sun Oct 30 01:06:26 2011 From: matti.picus at gmail.com (matti picus) Date: Sun, 30 Oct 2011 01:06:26 +0200 Subject: [pypy-dev] numpy and multi-dimensional arrays Message-ID: So it turns out there are two branches for multi dimensional arrays. I just committed get/set for ndim slices on the branch "numpy NDimArray" , and it turns out I am duplicating work done by someone else on the numpy-multidim branch :( . Could that developer please contact me directly so we can coordinate and me things forward faster without the duplication? BTW - I took the route that left the SingleDimArray intact, if someone would care to write a speed test we could check the comparitive performance between a=numpy.zeros(1000000); a+a #SingleDimArray and a=numpy.zeros((1000000,)); a+a #NDimArray Matti -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sun Oct 30 08:29:17 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 30 Oct 2011 08:29:17 +0100 Subject: [pypy-dev] numpy and multi-dimensional arrays In-Reply-To: References: Message-ID: On Sun, Oct 30, 2011 at 1:06 AM, matti picus wrote: > So it turns out there are two branches for multi dimensional arrays. I just > committed get/set for ndim slices on the branch "numpy NDimArray" , and it > turns out I am duplicating work done by someone else on the numpy-multidim > branch :( . Could that developer please contact me directly so we can > coordinate and me things forward faster without the duplication? 
> > BTW - I took the route that left the SingleDimArray intact, if someone would > care to write a speed test we could check the comparitive performance > between > a=numpy.zeros(1000000); a+a #SingleDimArray > and > a=numpy.zeros((1000000,)); a+a #NDimArray > > > Matti > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > Hey I looked at your branch and wanted to start from there, but it came with very little tests, so I started my own. PS. There is I think a third one that we can close ;-) Cheers, fijal From fijall at gmail.com Sun Oct 30 08:30:20 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 30 Oct 2011 08:30:20 +0100 Subject: [pypy-dev] numpy and multi-dimensional arrays In-Reply-To: References: Message-ID: On Sun, Oct 30, 2011 at 8:29 AM, Maciej Fijalkowski wrote: > On Sun, Oct 30, 2011 at 1:06 AM, matti picus wrote: >> So it turns out there are two branches for multi dimensional arrays. I just >> committed get/set for ndim slices on the branch "numpy NDimArray" , and it >> turns out I am duplicating work done by someone else on the numpy-multidim >> branch :( . Could that developer please contact me directly so we can >> coordinate and me things forward faster without the duplication? >> >> BTW - I took the route that left the SingleDimArray intact, if someone would >> care to write a speed test we could check the comparitive performance >> between >> a=numpy.zeros(1000000); a+a #SingleDimArray >> and >> a=numpy.zeros((1000000,)); a+a #NDimArray >> >> >> Matti >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> >> > > Hey > > I looked at your branch and wanted to start from there, but it came > with very little tests, so I started my own. > > PS. 
There is I think a third one that we can close ;-) > > Cheers, > fijal > For example 827d5d5d5218 came with no tests whatsoever. From arigo at tunes.org Sun Oct 30 14:40:19 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 30 Oct 2011 14:40:19 +0100 Subject: [pypy-dev] Goteborg sprint - newcomer In-Reply-To: References: Message-ID: Hi Mark, Welcome :-) > I shall be looking for cheap accommodation and will try to book > the Veckobostader as suggested in the blog. Ok. Sorry, it seems that we are 3 people there in total, so we cannot offer you a shared room. A bientôt, Armin. From hakan at debian.org Sun Oct 30 18:47:22 2011 From: hakan at debian.org (Hakan Ardo) Date: Sun, 30 Oct 2011 18:47:22 +0100 Subject: [pypy-dev] Goteborg sprint - newcomer In-Reply-To: References: Message-ID: Hi Mark, I've not yet arranged accommodations for the sprint, and would be happy to share a room at Veckobostader. However I will only be there 3/11 - 8/11. On Thu, Oct 27, 2011 at 9:20 PM, Mark Pearse wrote: > Hi, > I would like to come to the next sprint in Goteborg. I have been following > the Pypy project with interest since its inception, but this is the first > time I have been able to arrange free time to come and participate. I do > not have any particular focus for my interest and would be happy to work on > whatever people suggest. > My degree was in Mathematics, but I have worked most of my life as a > programmer. I did an occam compiler for the Manchester dataflow machine as > an MSc project. I heard about Pypy while researching for my own (now dead) > project to do an opensource Prograph implementation. > I will be making my travel arrangements tomorrow. At the moment I plan to > leave Brussels by train on the 1st, probably arriving Goteborg early > afternoon on the 2nd, and to travel back on the 10th. I shall be looking > for cheap accommodation and will try to book the Veckobostader as suggested > in the blog. I would be happy to share a room if anyone wants to. 
> Any comments or suggestions about travel or accommodation would be > gratefully received. > Best regards to all, > Mark Pearse > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -- Håkan Ardö From alex.pyattaev at gmail.com Mon Oct 31 23:47:38 2011 From: alex.pyattaev at gmail.com (Alex Pyattaev) Date: Tue, 01 Nov 2011 00:47:38 +0200 Subject: [pypy-dev] Success histories needed In-Reply-To: <4EA1835A.2050001@changemaker.nu> References: <3916825.yLpbnYj214@hunter-laptop.tontut.fi> <4EA1835A.2050001@changemaker.nu> Message-ID: <2713596.Kfjg648WAV@hunter-laptop.tontut.fi> Well... that took a bit long-ish because I was preparing a small conference we are having here in Tampere. Anyway, as promised. You are free to do whatever you like with the stuff (cut/remove stuff if it is too big) as long as there is my name somewhere nearby. PYPY + SWIG as a basis for simulation platform System-level simulations of communications networks are very resource-hungry, mostly due to complex statistical relations that have to be uncovered. For example, it is not enough to observe just one or two packets in the network to conclude if it is stable or not, the actual observation should last a lot longer than any feedback process in the network. If the buffer sizes are large, this can mean hundreds of thousands of packets. Wireless networks add one extra problem - the channel model is typically an $N^2$ complexity algorithm, and it has to be run for each and every symbol transmitted over the radio interface. Unfortunately, most of the existing simulation platforms for wireless networks do not allow for multiple sources to coexist and cooperate, and therefore are unsuitable for my research. A new simulation environment was to be devised. Originally, my simulator was implemented as a time-advance model with Matlab. 
Unfortunately, the complex logic of the network in question made it so slow that it was practically useless. The second implementation in C was scalable enough for a single wireless cell, but it was clearly unsuitable for implementing complex scheduling algorithms. Generally, scheduling algorithms operate with such terms as sets, maps and dynamic priority queues. The relay selection algorithms, which are the prime topic, also operate with sets and mappings. Unfortunately, such high-level data structures do not fit well into the paradigm of C/C++. Therefore, it was decided to split the model into two parts: the link-level model implemented in C and the control level implemented in Python. Obviously, the bulk of the calculations was to be performed at the link level, and the complex control logic was to be implemented in Python. Unfortunately, the performance of the Python interpreter soon appeared to be a bottleneck, but redesigning the model again did not seem like the right thing to do. And here, following the Zen of Python, the one obvious solution popped out - PyPy. PyPy is essentially a JIT compiler for Python, and it allows Python programs to run at speeds close to those achievable with pure C++. At the same time it retains the flexibility of the scripting language. As for the C part, the SWIG wrapper generator makes it possible to maintain the linking between the Python top level and the binary backend in a clean, efficient and cross-platform manner. It also allows for a nice trick - the wrappers can also be generated for Octave, which makes it possible to verify the channel model parts against their analytical models in a Matlab-like environment. Unfortunately, SWIG cannot generate interfaces for Matlab yet, but in most cases I have actually found Octave better and faster for small prototyping and testing projects. Currently, PyPy 1.6 runs the model about 200% faster than standard CPython 2.7.2.
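[Editor's note: the split described above - heavy numeric kernels in compiled code, with the scheduling logic in plain Python on sets, maps and priority queues - can be sketched roughly as below. This is an illustrative sketch, not code from the simulator; the `link_level_gain` stand-in plays the role of the SWIG-wrapped C channel model, and all names are hypothetical.]

```python
import heapq

def link_level_gain(src, dst):
    # Stand-in for the SWIG-wrapped C channel model; in the real
    # simulator this call would go into a compiled extension module,
    # evaluated pairwise (hence the N^2 cost over all nodes).
    return 1.0 / (1 + (src - dst) ** 2)

class Scheduler:
    """Control-level logic: a dynamic priority queue of timed events."""
    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker so heapq never compares payloads

    def schedule(self, time, event):
        heapq.heappush(self._queue, (time, self._counter, event))
        self._counter += 1

    def run(self):
        # Pop events in time order; real control logic would react to
        # each event (relay selection, retransmission, ...) here.
        order = []
        while self._queue:
            time, _, event = heapq.heappop(self._queue)
            order.append((time, event))
        return order

sched = Scheduler()
sched.schedule(2.0, "tx node 1")
sched.schedule(0.5, "tx node 0")
sched.schedule(1.0, "relay selection")
print(sched.run())
print(link_level_gain(0, 1))
```

The counter in the queue entries is a standard heapq idiom: it keeps insertion order among simultaneous events and avoids comparing arbitrary event objects.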
Also, it provides unique opportunities in configuration, as most of the model scenario can be constructed with object constructors and for loops, rather than some freaky config files or infinite bulks of XML. Overall, I am very satisfied with the PyPy experience, and I plan on trying it in an even more demanding environment - as an experimental engine for a P2P protocol tracker. Best regards, Alexander Pyattaev, Researcher, Department of Communications Engineering, Tampere University of Technology, Finland On Friday 21 October 2011 16:36:10 Bea During wrote: > Hi there > > Alexander Pyattaev skrev 2011-10-21 15:53: > > I think I will finalize that thing on this weekend. Of course you will > > get permissions to do whatever you wish with it, I am not a microsoft > > fan=) > Thanks! We look forward to reading it - and publishing it ;-) > > Cheers > > Bea > > > On Wednesday 19 October 2011 14:36:15 Bea During wrote: > >> Hi there > >> > >> Maciej Fijalkowski skrev 2011-10-17 10:30: > >>> On Mon, Oct 17, 2011 at 10:17 AM, Alex > >>> Pyattaev> > > wrote: > >>>> I have a fully-functional wireless network simulation tool written > >>>> in > >>>> pypy+swig. Is that nice? Have couple papers to refer to as well. > >>>> If > >>>> you want I could write a small abstract on how it was so great to > >>>> use > >>>> pypy (which, in fact, was really great)? > >>> > >>> Please :) > >> > >> As Maciej says - please do ;-) > >> > >> And I think we may want to be able to publish your abstract on our > >> blog > >> if that would be possible? > >> > >> Cheers > >> > >> Bea