From arigo at tunes.org Sun Apr 1 11:23:40 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 1 Apr 2012 11:23:40 +0200 Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> Message-ID: Hi Andrew, hi all, On Wed, Mar 28, 2012 at 18:23, Andrew Francis wrote: >>Indeed, and it was around 2007, so I expect the authors to have been >>involved in completely different things for quite some time now... >>But I could try to contact them anyway. > > Communications is good :-) I'm also thinking about writing a short paper collecting things I said and think on various blog posts. A kind of "position paper". What do others think of this idea? > My PyPy knowledge is still sketchy but I am changing that. I do understand > the Twisted reactor model > (thanks to my 2008 Pycon Talk) so I could follow discussions in that area. > Is this discussed on IRC? This is not discussed a lot right now. But it is apparently relatively easy to adapt the epoll-based Twisted reactor to use the 'transaction' module. (Again, this module is present in the stm-gc branch; look for lib_pypy/transaction.py for the interface, and pypy/module/transaction/* for the Python implementation on top of STM as exposed by RPython.) This 'transaction' module is also meant to be used directly, for example in this kind of Python code: for n in range(...): do_something(n) If each call to do_something() has "reasonable chances" to be independent from other calls, and if the order doesn't matter, then it can be rewritten as: for n in range(...): transaction.add(do_something, n) transaction.run() In addition, each transaction can add more transactions that will be run after it.
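The scheduling pattern described here can be mimicked in plain Python. The sketch below is only a sequential stand-in for the interface (add() and run(), with callbacks able to add() more work); the real lib_pypy/transaction.py may run the callbacks in parallel under STM, and do_something() here is just a placeholder.

```python
import collections

# Sequential stand-in for the 'transaction' interface sketched above.
# On a real STM build the scheduled calls may run in parallel and in
# any order; here they simply run one after another.
_pending = collections.deque()

def add(callback, *args):
    """Schedule callback(*args) for the next run() call."""
    _pending.append((callback, args))

def run():
    """Run scheduled callbacks until none are left; a callback may
    itself call add() to schedule follow-up transactions."""
    while _pending:
        callback, args = _pending.popleft()
        callback(*args)

def do_something(n, results):
    results.append(n * n)  # placeholder for independent per-n work

squares = []
for n in range(5):
    add(do_something, n, squares)
run()
print(squares)  # -> [0, 1, 4, 9, 16]
```

Since each call here is independent of the others, the order in which run() executes them would not matter, which is exactly the property the rewrite above requires.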
So if you want to play with lib_pypy/stackless.py to add calls to 'transaction', feel free :-) Maybe it will show that a slightly different API is required from the 'transaction' module; I don't really know so far. A bient?t, Armin. From arigo at tunes.org Sun Apr 1 11:30:43 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 1 Apr 2012 11:30:43 +0200 Subject: [pypy-dev] Profiling pypy code In-Reply-To: References: Message-ID: Hi Timothy, On Wed, Mar 28, 2012 at 16:35, Timothy Baldridge wrote: > What should I look into for benchmarking actual times spent > in each loop? As a first approximation, getting only the execution counts is enough. Basically the work flow is: pick one of the most often executed loops, look into it, and be scared by the amount of cruft left --- calls to RPython functions, notably. Then try to remove them. Getting the execution times is not very useful at first, because the typical execution counts are not flat at all: often, the first few loops are run 100's or 1000's of times more often than all the others. At least it is so when looking at the interpreter running small examples. A bient?t, Armin. From arigo at tunes.org Sun Apr 1 11:46:40 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 1 Apr 2012 11:46:40 +0200 Subject: [pypy-dev] Speeding up zlib in standard library In-Reply-To: References: Message-ID: Hi Fijal, On Tue, Mar 27, 2012 at 15:30, Maciej Fijalkowski wrote: > This sounds overly specific to me. What do others think? It is indeed overly specific, but it may be useful nevertheless. We need to rephrase it to point out the specific-vs-general parts. Something like this: Q: How can I test and benchmark a modification to RPython? (As opposed to a modification done in the source code of the PyPy Python interpreter) A: As an example, let's say that you want to tweak pypy.rlib.rstring.StringBuilder. This file contains the implementation for tests only; the real translated implementation is in pypy.rpython.lltypesystem.rbuilder. 
This is tested by pypy.rpython.test.test_rbuilder. Be sure that any tweak you do still passes the existing tests, and if possible, add new tests specifically for your changes. (Run the tests with: python test_all.py rpython/test/test_rbuilder.py) Then to get benchmarks: based on the existing examples, create a new StringBuilder benchmark as the file pypy/translator/targetStringBuilder.py which will time the functionality --- written as a small RPython program --- and do this: $ cd pypy/translator/goal/ $ python translate.py targetStringBuilder.py $ ./targetStringBuilder-c You don't need to translate the full PyPy Python interpreter to benchmark every change. However, you should do it once at the end, to make sure that no corner case has been missed and that the performance improvements are visible there as well. --- Armin From arigo at tunes.org Sun Apr 1 11:53:54 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 1 Apr 2012 11:53:54 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> Message-ID: Hi Roberto, On 27.03.2012, Roberto De Ioris wrote: > Hi everyone, i have finally managed to have a pypy plugin into uWSGI via > libpypy-c. Great! Are you still working on it? We can accept patches; for us it would also be cool if you can work on a clone of the repository and issue "pull requests" on bitbucket.org. A bient?t, Armin. From arigo at tunes.org Sun Apr 1 12:31:31 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 1 Apr 2012 12:31:31 +0200 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Hi Stefan, Done in 623bcea85df3. 
Armin From fijall at gmail.com Sun Apr 1 13:05:04 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 1 Apr 2012 13:05:04 +0200 Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> Message-ID: On Sun, Apr 1, 2012 at 11:23 AM, Armin Rigo wrote: > Hi Andrew, hi all, > > On Wed, Mar 28, 2012 at 18:23, Andrew Francis wrote: >>>Indeed, and it was around 2007, so I expect the authors to have been >>>involved in completely different things for quite some time now... >>>But I could try to contact them anyway. >> >> Communications is good :-) > > I'm also thinking about writing a short paper collecting things I said > and think on various blog posts. ?A kind of "position paper". ?What do > others think of this idea? I can help > >> My PyPy knowledge is still?sketchy but I am changing that. ?I do understand >> the Twisted reactor model >> ?(thanks to my 2008 Pycon Talk) so I could follow discussions in that area. >> Is this discussed on IRC? > > This is not discussed a lot right now. ?But it is apparently > relatively easy to adapt the epoll-based Twisted reactor to use the > 'transaction' module. ?(Again, this module is present in the stm-gc > branch; look for lib_pypy/transaction.py for the interface, and > pypy/module/transaction/* for the Python implementation on top of STM > as exposed by RPython.) ?This 'transaction' module is also meant to be > used directly, for example in this kind of Python code: > > ? ?for n in range(...): > ? ? ? ?do_something(n) > > If each call to do_something() has "reasonable chances" to be > independent from other calls, and if the order doesn't matter, then it > can be rewritten as: > > ? ?for n in range(...): > ? ? ? ?transaction.add(do_something, n) > ? ?transaction.run() > > In addition, each transaction can add more transactions that will be > run after it. 
?So if you want to play with lib_pypy/stackless.py to > add calls to 'transaction', feel free :-) ?Maybe it will show that a > slightly different API is required from the 'transaction' module; I > don't really know so far. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From roberto at unbit.it Sun Apr 1 13:52:54 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Sun, 1 Apr 2012 13:52:54 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> Message-ID: <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> > Hi Roberto, > > On 27.03.2012, Roberto De Ioris wrote: >> Hi everyone, i have finally managed to have a pypy plugin into uWSGI via >> libpypy-c. > > Great! > > Are you still working on it? We can accept patches; for us it would > also be cool if you can work on a clone of the repository and issue > "pull requests" on bitbucket.org. > > Hi Armin, yes i am still working on it, i would like to have multithread support working before sending other patches. -- Roberto De Ioris http://unbit.it From lac at openend.se Sun Apr 1 14:02:15 2012 From: lac at openend.se (Laura Creighton) Date: Sun, 01 Apr 2012 14:02:15 +0200 Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: Message from Armin Rigo of "Sun, 01 Apr 2012 11:23:40 +0200." References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> Message-ID: <201204011202.q31C2FkZ031055@theraft.openend.se> In a message of Sun, 01 Apr 2012 11:23:40 +0200, Armin Rigo writes: >I'm also thinking about writing a short paper collecting things I said >and think on various blog posts. A kind of "position paper". What do >others think of this idea? Good idea. 
Is this a 'scientific paper for a conference' sort of thing, or a 'post it on the internet' sort of thing? If the second, remember to include a 'Fund this research HERE' button. :-) Laura From arigo at tunes.org Sun Apr 1 14:06:27 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 1 Apr 2012 14:06:27 +0200 Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: <201204011202.q31C2FkZ031055@theraft.openend.se> References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> <201204011202.q31C2FkZ031055@theraft.openend.se> Message-ID: Hi Laura, On Sun, Apr 1, 2012 at 14:02, Laura Creighton wrote: > Good idea. Is this a 'scientific paper for a conference' sort of thing, > or a 'post it on the internet' sort of thing? I had in mind a workshop/conference/journal, but of course it would also be available on the net. A bientôt, Armin. From stefan_ml at behnel.de Sun Apr 1 15:04:17 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 01 Apr 2012 15:04:17 +0200 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Armin Rigo, 01.04.2012 12:31: > Hi Stefan, > > Done in 623bcea85df3. Thanks, Armin! Would have taken me a while to figure these things out. I'll give it a try with the next nightly. Stefan From fijall at gmail.com Sun Apr 1 15:25:08 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 1 Apr 2012 15:25:08 +0200 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: On Sun, Apr 1, 2012 at 3:04 PM, Stefan Behnel wrote: > Armin Rigo, 01.04.2012 12:31: >> Hi Stefan, >> >> Done in 623bcea85df3. > > Thanks, Armin! > > Would have taken me a while to figure these things out. > > I'll give it a try with the next nightly.
> > Stefan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev You can create your own nightly by clicking "force build" on the buildbot. From fijall at gmail.com Sun Apr 1 15:42:23 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 1 Apr 2012 15:42:23 +0200 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: On Sun, Apr 1, 2012 at 3:25 PM, Maciej Fijalkowski wrote: > On Sun, Apr 1, 2012 at 3:04 PM, Stefan Behnel wrote: >> Armin Rigo, 01.04.2012 12:31: >>> Hi Stefan, >>> >>> Done in 623bcea85df3. >> >> Thanks, Armin! >> >> Would have taken me a while to figure these things out. >> >> I'll give it a try with the next nightly. >> >> Stefan >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev > > You can create your own nightly by clicking "force build" on the buildbot. Ah, maybe worth noting is that -jit are ones that create nightlies that you're interested in. I'll cancel the other ones. Cheers, fijal From stefan_ml at behnel.de Sun Apr 1 15:48:58 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 01 Apr 2012 15:48:58 +0200 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Maciej Fijalkowski, 01.04.2012 15:42: > On Sun, Apr 1, 2012 at 3:25 PM, Maciej Fijalkowski wrote: >> On Sun, Apr 1, 2012 at 3:04 PM, Stefan Behnel wrote: >>> Armin Rigo, 01.04.2012 12:31: >>>> Hi Stefan, >>>> >>>> Done in 623bcea85df3. >>> >>> Thanks, Armin! >>> >>> Would have taken me a while to figure these things out. >>> >>> I'll give it a try with the next nightly. >> >> You can create your own nightly by clicking "force build" on the buildbot. > > Ah, maybe worth noting is that -jit are ones that create nightlies > that you're interested in. I'll cancel the other ones. 
Right, it's not immediately obvious which build job triggers what kind of output. I'm currently using the nojit version because it hand-wavingly appeared more stable than the jit version so far, and I doubt that there's any benefit for us to test against a jit build. Specifically, I'm interested in this file: http://buildbot.pypy.org/nightly/trunk/pypy-c-nojit-latest-linux64.tar.bz2 Stefan From stefan_ml at behnel.de Sun Apr 1 15:51:26 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 01 Apr 2012 15:51:26 +0200 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: Stefan Behnel, 01.04.2012 15:04: > Armin Rigo, 01.04.2012 12:31: >> Hi Stefan, >> >> Done in 623bcea85df3. > > Thanks, Armin! > > Would have taken me a while to figure these things out. > > I'll give it a try with the next nightly. Hmm, looks broken: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-64/builds/810/steps/translate/logs/stdio Stefan From fijall at gmail.com Sun Apr 1 15:56:18 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 1 Apr 2012 15:56:18 +0200 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: On Sun, Apr 1, 2012 at 3:48 PM, Stefan Behnel wrote: > Maciej Fijalkowski, 01.04.2012 15:42: >> On Sun, Apr 1, 2012 at 3:25 PM, Maciej Fijalkowski wrote: >>> On Sun, Apr 1, 2012 at 3:04 PM, Stefan Behnel wrote: >>>> Armin Rigo, 01.04.2012 12:31: >>>>> Hi Stefan, >>>>> >>>>> Done in 623bcea85df3. >>>> >>>> Thanks, Armin! >>>> >>>> Would have taken me a while to figure these things out. >>>> >>>> I'll give it a try with the next nightly. >>> >>> You can create your own nightly by clicking "force build" on the buildbot. >> >> Ah, maybe worth noting is that -jit are ones that create nightlies >> that you're interested in. I'll cancel the other ones. > > Right, it's not immediately obvious which build job triggers what kind of > output. 
I'm currently using the nojit version because it hand-wavingly > appeared more stable than the jit version so far, and I doubt that there's > any benefit for us to test against a jit build. Specifically, I'm > interested in this file: > > http://buildbot.pypy.org/nightly/trunk/pypy-c-nojit-latest-linux64.tar.bz2 > > Stefan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev It's totally not :) Then it's the applevel one. I'll poke it. From fijall at gmail.com Sun Apr 1 15:57:07 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 1 Apr 2012 15:57:07 +0200 Subject: [pypy-dev] Bringing Cython and PyPy closer together In-Reply-To: References: Message-ID: On Sun, Apr 1, 2012 at 3:51 PM, Stefan Behnel wrote: > Stefan Behnel, 01.04.2012 15:04: >> Armin Rigo, 01.04.2012 12:31: >>> Hi Stefan, >>> >>> Done in 623bcea85df3. >> >> Thanks, Armin! >> >> Would have taken me a while to figure these things out. >> >> I'll give it a try with the next nightly. > > Hmm, looks broken: > > http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-64/builds/810/steps/translate/logs/stdio > > Stefan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Fixing fixing... From stefan_ml at behnel.de Mon Apr 2 10:35:33 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 02 Apr 2012 10:35:33 +0200 Subject: [pypy-dev] Findings in the Cython test suite Message-ID: Hi, the new exception handling functions enabled a lot more tests in Cython's test suite, with still lots and lots of failures. I created tickets for a couple of them where it's clear that the problem is in PyPy/cpyext. 
The test results are collected here: https://sage.math.washington.edu:8091/hudson/view/dev-scoder/job/cython-scoder-pypy-nightly/lastCompletedBuild/testReport/ Here's the general log (which I find easier to read in this case): https://sage.math.washington.edu:8091/hudson/view/dev-scoder/job/cython-scoder-pypy-nightly/89/consoleFull For C compiler errors ("CompileError" exceptions) you'll have to look in the log at the place where the test was executed, but those are getting few. The test results are also recorded in the log, at the end (which, given the number of failures, starts at about one third through the log). Some failures are due to wrong expectations in test code, e.g. we sometimes use the fact that small integers are cached in CPython. Others fail due to different exception messages in the doctests. Both can be handled on the Cython side, given that CPython's own error messages aren't exactly carved in stone either. There is one major problem that accounts for the bulk of the test failures, somehow related to frame handling. You can tell by the huge number of long traceback sequences that run into StackOverflowErrors and equivalent RuntimeErrors. When you look closer (e.g. repeatedly searching the log for the "classkwonlyargs" test), you will notice that the traceback refers to more than one test, i.e. the next doctest execution somehow picks up the frame of a previous test and continues from it. Funny enough, the frame leaking tests that have run (and failed) before the current one appear *below* the current test in the stack trace. This makes it likely that this is due to the way exception stack frames are constructed in Cython. They are only instantiated when an exception is being propagated, and then registered using PyTraceBack_Here(). It seems just as likely that this is a bug in cpyext as that it is due to problematic code in Cython. In case it's PyPy's fault, maybe there's something like an off-by-one when cleaning up the traceback frames somewhere?
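As an aside, the small-integer caching mentioned above is easy to demonstrate. Constructing the ints at runtime avoids compile-time constant sharing, so object identity reflects only the interpreter's cache; the cached range (-5 to 256) is a CPython implementation detail that other Pythons need not reproduce, which is why `is`-based test expectations like these break:

```python
# Runtime construction via int() sidesteps constant folding, so the
# identity check reflects interpreter caching, not compile-time reuse.
x = int("100")
y = int("100")
print(x is y)   # True on CPython: ints in -5..256 come from a cache

p = int("10000")
q = int("10000")
print(p is q)   # False on CPython: larger ints are created fresh
```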
Another minor thing I noticed, PyPy seems to check the object type of keyword argument names at the caller side, whereas CPython does it as part of the function argument unpacking. Since Cython also does it on the function side and therefore expects its own error messages in the tests (which mimic CPython's, including the name of the function), these tests fail now (look for "keywords must be strings" in the test log). Not sure if it's worth doing anything about this - checking the type at call time isn't really wrong, just different. However, this is one of the cases where it would be nice if PyPy simply included the function name in the error message. The same applies to the "got multiple values for keyword argument" error. Apart from the frame handling bug, the remaining problems look minor enough to say that we are really close to a point where Cython on PyPy becomes usable. Stefan From felipecruz at loogica.net Mon Apr 2 23:49:45 2012 From: felipecruz at loogica.net (Felipe Cruz) Date: Mon, 2 Apr 2012 18:49:45 -0300 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> Message-ID: Hello Roberto, If you need help with code or testing I would be glad in help. Is this work available in some repository? cheers Felipe 2012/4/1 Roberto De Ioris > > > Hi Roberto, > > > > On 27.03.2012, Roberto De Ioris wrote: > >> Hi everyone, i have finally managed to have a pypy plugin into uWSGI via > >> libpypy-c. > > > > Great! > > > > Are you still working on it? We can accept patches; for us it would > > also be cool if you can work on a clone of the repository and issue > > "pull requests" on bitbucket.org. > > > > > > Hi Armin, yes i am still working on it, i would like to have multithread > support working before sending other patches. 
> > > -- > Roberto De Ioris > http://unbit.it > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto at unbit.it Tue Apr 3 07:43:12 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Tue, 3 Apr 2012 07:43:12 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> Message-ID: <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> > Hello Roberto, > > If you need help with code or testing I would be glad in help. > > Is this work available in some repository? > > cheers > Felipe Hi, i have started committing here: https://bitbucket.org/pypy/pypy The current Py_Initialize() implementation is very skeletal. It should get some of the bin/py.py (included importing site.py) in the next few hours. -- Roberto De Ioris http://unbit.it From roberto at unbit.it Tue Apr 3 07:47:24 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Tue, 3 Apr 2012 07:47:24 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> Message-ID: <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> > >> Hello Roberto, >> >> If you need help with code or testing I would be glad in help. >> >> Is this work available in some repository? >> >> cheers >> Felipe > > Hi, i have started committing here: > > https://bitbucket.org/pypy/pypy Sorry, i mean https://bitbucket.org/unbit/pypy ;) > > The current Py_Initialize() implementation is very skeletal. 
It should get > some of the bin/py.py (included importing site.py) in the next few hours. > -- > Roberto De Ioris > http://unbit.it > -- Roberto De Ioris http://unbit.it From fijall at gmail.com Tue Apr 3 10:53:42 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 3 Apr 2012 10:53:42 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> Message-ID: On Tue, Apr 3, 2012 at 7:47 AM, Roberto De Ioris wrote: > >> >>> Hello Roberto, >>> >>> If you need help with code or testing I would be glad in help. >>> >>> Is this work available in some repository? >>> >>> cheers >>> Felipe >> >> Hi, i have started committing here: >> >> https://bitbucket.org/pypy/pypy > > Sorry, i mean > > https://bitbucket.org/unbit/pypy > > ;) > > >> >> The current Py_Initialize() implementation is very skeletal. It should get >> some of the bin/py.py (included importing site.py) in the next few hours. >> -- >> Roberto De Ioris >> http://unbit.it >> > > > -- > Roberto De Ioris > http://unbit.it > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Hi Isn't Py_Initialize clashing name with stuff exported from cpyext? Can it be named PyPy_Initialize or something? 
Cheers, fijal From roberto at unbit.it Tue Apr 3 11:12:02 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Tue, 3 Apr 2012 11:12:02 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> Message-ID: <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> > On Tue, Apr 3, 2012 at 7:47 AM, Roberto De Ioris wrote: >> >>> >>>> Hello Roberto, >>>> >>>> If you need help with code or testing I would be glad in help. >>>> >>>> Is this work available in some repository? >>>> >>>> cheers >>>> Felipe >>> >>> Hi, i have started committing here: >>> >>> https://bitbucket.org/pypy/pypy >> >> Sorry, i mean >> >> https://bitbucket.org/unbit/pypy >> >> ;) >> >> >>> >>> The current Py_Initialize() implementation is very skeletal. It should >>> get >>> some of the bin/py.py (included importing site.py) in the next few >>> hours. >>> -- >>> Roberto De Ioris >>> http://unbit.it >>> >> >> >> -- >> Roberto De Ioris >> http://unbit.it >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev > > Hi > > Isn't Py_Initialize clashing name with stuff exported from cpyext? Can > it be named PyPy_Initialize or something? > Py_Initialize() is only used for app embedding python. It is not usable (or used) in c extensions. 
For me there is no problem in renaming it, but i suppose bigger apps (like blender) would prefer avoiding #ifdef's :) -- Roberto De Ioris http://unbit.it From fijall at gmail.com Tue Apr 3 11:20:47 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 3 Apr 2012 11:20:47 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> Message-ID: On Tue, Apr 3, 2012 at 11:12 AM, Roberto De Ioris wrot > >> On Tue, Apr 3, 2012 at 7:47 AM, Roberto De Ioris wrote: >>> >>>> >>>>> Hello Roberto, >>>>> >>>>> If you need help with code or testing I would be glad in help. >>>>> >>>>> Is this work available in some repository? >>>>> >>>>> cheers >>>>> Felipe >>>> >>>> Hi, i have started committing here: >>>> >>>> https://bitbucket.org/pypy/pypy >>> >>> Sorry, i mean >>> >>> https://bitbucket.org/unbit/pypy >>> >>> ;) >>> >>> >>>> >>>> The current Py_Initialize() implementation is very skeletal. It should >>>> get >>>> some of the bin/py.py (included importing site.py) in the next few >>>> hours. >>>> -- >>>> Roberto De Ioris >>>> http://unbit.it >>>> >>> >>> >>> -- >>> Roberto De Ioris >>> http://unbit.it >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> http://mail.python.org/mailman/listinfo/pypy-dev >> >> Hi >> >> Isn't Py_Initialize clashing name with stuff exported from cpyext? Can >> it be named PyPy_Initialize or something? >> > > Py_Initialize() is only used for app embedding python. It is not usable > (or used) in c extensions. 
> > For me there is no problem in renaming it, but i suppose bigger apps (like > blender) would prefer avoiding #ifdef's :) > > > -- > Roberto De Ioris > http://unbit.it Ok I see. Is the rest of the API used going to be cpyext? If so, then Py_Initialize is indeed a perfect choice. From roberto at unbit.it Tue Apr 3 11:32:05 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Tue, 3 Apr 2012 11:32:05 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> Message-ID: <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> > > Ok I see. > > Is the rest of the API used going to be cpyext? If so, then > Py_Initialize is indeed a perfect choice. > I am about to add: Py_SetPythonHome Py_SetProgramName Py_Finalize i will put them into module/cpyext/src/pythonrun.c Do you think Py_Initialize should go there too ? -- Roberto De Ioris http://unbit.it From fijall at gmail.com Tue Apr 3 11:42:44 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 3 Apr 2012 11:42:44 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> Message-ID: On Tue, Apr 3, 2012 at 11:32 AM, Roberto De Ioris wrote: > > >> >> Ok I see. >> >> Is the rest of the API used going to be cpyext? 
If so, then >> Py_Initialize is indeed a perfect choice. >> > > > I am about to add: > > Py_SetPythonHome > Py_SetProgramName > Py_Finalize > > i will put them into > > module/cpyext/src/pythonrun.c > > Do you think Py_Initialize should go there too ? > > -- > Roberto De Ioris > http://unbit.it Sounds like a good idea. Should I merge the pull request now or wait for the others? Cheers, fijal From amauryfa at gmail.com Tue Apr 3 11:46:19 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 3 Apr 2012 11:46:19 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> Message-ID: 2012/4/3 Maciej Fijalkowski : > Isn't Py_Initialize clashing name with stuff exported from cpyext? Can > it be named PyPy_Initialize or something? cpyext does not provide Py_Initialize() yet. This patch should use more RPython code, and probably be moved to cpyext. -- Amaury Forgeot d'Arc From roberto at unbit.it Tue Apr 3 11:49:59 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Tue, 3 Apr 2012 11:49:59 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> Message-ID: <2b7fbb99ef9d417aa8098420959a3ceb.squirrel@manage.unbit.it> > On Tue, Apr 3, 2012 at 11:32 AM, Roberto De Ioris > wrote: >> >> >>> >>> Ok I see. >>> >>> Is the rest of the API used going to be cpyext? If so, then >>> Py_Initialize is indeed a perfect choice. 
>>> >> >> >> I am about to add: >> >> Py_SetPythonHome >> Py_SetProgramName >> Py_Finalize >> >> i will put them into >> >> module/cpyext/src/pythonrun.c >> >> Do you think Py_Initialize should go there too ? >> >> -- >> Roberto De Ioris >> http://unbit.it > > Sounds like a good idea. Should I merge the pull request now or wait > for the others? > > I think it is better to wait. Moving that to cpyext will avoid messing with translators (adding more exported symbols) too. -- Roberto De Ioris http://unbit.it From stefan_ml at behnel.de Tue Apr 3 21:21:57 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 03 Apr 2012 21:21:57 +0200 Subject: [pypy-dev] Findings in the Cython test suite In-Reply-To: References: Message-ID: Stefan Behnel, 02.04.2012 10:35: > There is one major problem that accounts for the bulk of the test failures, > somehow related to frame handling. You can tell by the huge amount of long > traceback sequences that run into StackOverflowErrors and equivalent > RuntimeErrors. I found one test that, when run all by itself, triggers a RuntimeError, boldly claiming that it reached the maximum recursion depth after a couple of calls. There do not seem to be any frames or tracebacks involved up to that point. The test executes a C representation of the following code, run as a doctest: ''' class A: def append(self, x): print u"appending" return x def test_append(L): """ >>> test_append(A()) """ print L.append(1) ''' (The background is that Cython optimistically replaces calls to obj.append() by code that assumes that obj is a list, so this tests the failure of that assumption) It gives me this error: """ Traceback (most recent call last): File ".../pypy/lib-python/2.7/doctest.py", line 1254, in __run compileflags, 1) in test.globs File "", line 1, in _ = test_append(A()) File "append.pyx", line 34, in append.test_append (append.c:930) RuntimeError: maximum recursion depth exceeded """ The same happens when I call it manually, i.e. 
import append; append.test_append(append.A()) The basic C code that gets executed in the test_append() function is simply PyObject* m = PyObject_GetAttrString(L, "append"); r = PyObject_CallFunctionObjArgs(m, x, NULL); And that's where the error gets raised (returning r==NULL). Specifically, it does not enter into the actual append() method but fails before that, right at the call. I put a copy of the generated C file here: http://consulting.behnel.de/append.tgz Could someone shed some light on this? Stefan From amauryfa at gmail.com Tue Apr 3 22:49:33 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 3 Apr 2012 22:49:33 +0200 Subject: [pypy-dev] Findings in the Cython test suite In-Reply-To: References: Message-ID: 2012/4/3 Stefan Behnel : > The basic C code that gets executed in the test_append() function is simply > > PyObject* m = PyObject_GetAttrString(L, "append"); > r = PyObject_CallFunctionObjArgs(m, x, NULL); > > And that's where the error gets raised (returning r==NULL). Specifically, > it does not enter into the actual append() method but fails before that, > right at the call. The issue is with CyFunctionType, which looks like a subclass of PyCFunction_Type. (it's a hack: normally this type is not subclassable, booo) L.append is such a CyFunctionType. Its tp_call slot is called, but this is defined to __Pyx_PyCFunction_Call which is #defined to PyObject_Call, which itself invokes the tp_call slot... A solution would be to access the "base" tp_call, the one that CPython exposes as PyCFunction_Call. Unfortunately cpyext only defines one tp_call shared by all types, one which simply delegates to self.__call__. This means that "calling the base slot" does not work very well with cpyext. There is a solution though, which I implemented a long time ago for the tp_setattro slot.
It can be easily expanded to all slots but I'm a bit scared of the explosion of code this could generate. -- Amaury Forgeot d'Arc From fijall at gmail.com Tue Apr 3 23:07:38 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 3 Apr 2012 23:07:38 +0200 Subject: [pypy-dev] Findings in the Cython test suite In-Reply-To: References: Message-ID: On Tue, Apr 3, 2012 at 10:49 PM, Amaury Forgeot d'Arc wrote: > 2012/4/3 Stefan Behnel : > > The basic C code that gets executed in the test_append() function is > simply > > > > PyObject* m = PyObject_GetAttrString(L, "append"); > > r = PyObject_CallFunctionObjArgs(m, x, NULL); > > > > And that's where the error gets raised (returning r==NULL). Specifically, > > it does not enter into the actual append() method but fails before that, > > right at the call. > > The issue is with CyFunctionType, which looks like a subclass of > PyCFunction_Type. > (it's a hack: normally this type is not subclassable, booo) > > L.append is such a CyFunctionType. > Its tp_call slot is called, but this is defined to __Pyx_PyCFunction_Call > which is #defined to PyObject_Call, which itself invokes the tp_call > slot... > > A solution would be to access the "base" tp_call, the one that CPython > exposes as PyCFunction_Call. > Unfortunately cpyext only defines one tp_call shared by all types, one > which simply delegates to self.__call__. > > This means that "calling the base slot" does not work very well with > cpyext. > There is a solution though, which I implemented a long time ago for > the tp_setattro slot. > It can be easily expanded to > all slots but I'm a bit scared of the explosion of code this could > generate.
> > -- > Amaury Forgeot d'Arc > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Hey I would like to point out that all the assumptions like "this type is not subclassable" or "this field is read only" might work on cpython, but the JIT will make assumptions based on that and it'll stop working or produce very occasional segfaults Cheers, fijal -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Tue Apr 3 23:42:58 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 3 Apr 2012 23:42:58 +0200 Subject: [pypy-dev] Findings in the Cython test suite In-Reply-To: References: Message-ID: 2012/4/3 Maciej Fijalkowski : > I would like to point out that all the assumptions like "this type is not > subclassable" or "this field is read only" might work on cpython, but the > JIT will make assumptions based on that and it'll stop working or produce > very occasional segfaults Fortunately pypy objects are not exposed to C code, only a copy of fields and slots; and all transfers are explicitly written in RPython by cpyext. But no need to invoke the JIT here; with cpyext there can be only one object at a given address (!) and even if we had a PyCFunction_Call, it would not receive the correct interpreter object. This could be done of course, but with another hack. -- Amaury Forgeot d'Arc From aidembb at yahoo.com Wed Apr 4 02:14:51 2012 From: aidembb at yahoo.com (Roger Flores) Date: Tue, 3 Apr 2012 17:14:51 -0700 (PDT) Subject: [pypy-dev] pypy MemoryError crash Message-ID: <1333498491.32373.YahooMailNeo@web45509.mail.sp1.yahoo.com> Hello all. A bug in a python application of mine resulted in this pypy crash: RPython traceback: File "translator_goal_targetpypystandalone.c", line 1033, in entry_point File "interpreter_function.c", line 1017, in funccall__star_1
File "interpreter_function.c", line 1046, in funccall__star_1 File "rpython_memory_gc_minimark.c", line 2512, in MiniMarkGC_collect_and_reserve File "rpython_memory_gc_minimark.c", line 2216, in MiniMarkGC_minor_collection File "rpython_memory_gc_minimark.c", line 4503, in MiniMarkGC_collect_oldrefs_to_nursery File "rpython_memory_gc_base.c", line 1714, in trace___trace_drag_out File "rpython_memory_gc_minimarkpage.c", line 214, in ArenaCollection_malloc File "rpython_memory_gc_minimarkpage.c", line 532, in ArenaCollection_allocate_new_page File "rpython_memory_gc_minimarkpage.c", line 728, in ArenaCollection_allocate_new_arena Fatal RPython error: MemoryError This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. At first I thought it was pypy reporting that the PPM compressor had ran out of memory, and simply didn't report the app's stack frames. But I've since come to realize that normally pypy does report a MemoryError and the stack frames involved, just like Python, and that this case was instead a RPython crash. I'm hoping the above stack trace from pypy is enough of a clue. >pypy -v Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 18:31:47) [PyPy 1.8.0 with MSC v.1500 32 bit] on win32 -Roger -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Wed Apr 4 05:42:16 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 04 Apr 2012 05:42:16 +0200 Subject: [pypy-dev] Findings in the Cython test suite In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 03.04.2012 22:49: > 2012/4/3 Stefan Behnel: >> The basic C code that gets executed in the test_append() function is simply >> >> PyObject* m = PyObject_GetAttrString(L, "append"); >> r = PyObject_CallFunctionObjArgs(m, x, NULL); >> >> And that's where the error gets raised (returning r==NULL).
Specifically, >> it does not enter into the actual append() method but fails before that, >> right at the call. > > The issue is with CyFunctionType, which looks like a subclass of > PyCFunction_Type. Yes. It's required to make C implemented functions compatible with Python functions. Otherwise, they look and behave rather different in CPython, especially when used as methods (e.g. when assigned to class attributes after class creation). It's also faster for many things. > (it's a hack: normally this type is not subclassable, booo) Yep, that's a problem - works in CPython, though... > L.append is such a CyFunctionType. Ah, right - I should have tested with a class created at the Python level as well - that works. > Its tp_call slot is called, but this is defined to __Pyx_PyCFunction_Call > which is #defined to PyObject_Call, which itself invokes the tp_call slot... Interesting. Then that's the wrong thing to do in PyPy. I guess you just put it there in your original patch because PyPy doesn't expose PyCFunction_Call() and it seemed to be the obvious replacement. > A solution would be to access the "base" tp_call, the one that CPython > exposes as PyCFunction_Call. > Unfortunately cpyext only defines one tp_call shared by all types, one > which simply delegates to self.__call__. Makes sense for PyPy objects. > This means that "calling the base slot" does not work very well with cpyext. > There is a solution though, which I implemented a long time ago for > the tp_setattro slot. > It can be easily expanded to [...] > all slots but I'm a bit scared of the explosion of code this could generate. I consider it a rather special case that Cython subtypes PyCFunction_Type, so a general solution may not be necessary. Is there anything we can do on Cython side? We control the type and its tp_call slot, after all. You could also implement a fake PyCFunction_Call function specifically for this purpose. Or even just a PyPyCFunction_Call(). 
OTOH, I had suggested before that PyPy could eventually learn about Cython's function type and optimise for it. We could implement PEP 362 to include C type information in the annotations of the signature object, and PyPy could use that to pack and execute a direct C call to the underlying C function. By caching the signature mapping, that could bring the PyPy-to-Cython call overhead down to that of ctypes. Stefan From amauryfa at gmail.com Wed Apr 4 09:22:25 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 4 Apr 2012 09:22:25 +0200 Subject: [pypy-dev] Findings in the Cython test suite In-Reply-To: References: Message-ID: 2012/4/4 Stefan Behnel : > I consider it a rather special case that Cython subtypes PyCFunction_Type, > so a general solution may not be necessary. Is there anything we can do on > Cython side? We control the type and its tp_call slot, after all. You could > also implement a fake PyCFunction_Call function specifically for this > purpose. Or even just a PyPyCFunction_Call(). Yes, this is the hack I was referring to in another thread: cpyext could implement PyCFunction_Call by not taking a w_obj as argument (this would require building an interpreter object from its address, which would not work because there is already a CyFunction pointer at the same address) but work directly with the PyCFunctionObject members (m_ml, m_self and m_module) without considering they come from a PyObject-derived structure. But this approach is not scalable - it has to be hand-written each time, and will not work at all for most kind of objects. 
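The delegation loop discussed in this thread can be mimicked at the Python level. A toy analogy (not the actual cpyext or Cython code) of a call slot that merely re-enters the generic call machinery:

```python
class BrokenCallable(object):
    # Stand-in for a tp_call slot that is #defined to PyObject_Call:
    # "calling" the object just re-enters the generic call path instead
    # of doing the real work, so every call recurses until the limit.
    def __call__(self, *args):
        return self(*args)

hit_recursion_limit = False
try:
    BrokenCallable()(1)
except RuntimeError:        # "maximum recursion depth exceeded"
    hit_recursion_limit = True
print(hit_recursion_limit)  # True
```

This is consistent with the symptom reported above: the RuntimeError is raised right at the call, before the method body is ever entered.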
-- Amaury Forgeot d'Arc From Ronny.Pfannschmidt at gmx.de Wed Apr 4 09:27:05 2012 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Wed, 04 Apr 2012 09:27:05 +0200 Subject: [pypy-dev] pypy MemoryError crash In-Reply-To: <1333498491.32373.YahooMailNeo@web45509.mail.sp1.yahoo.com> References: <1333498491.32373.YahooMailNeo@web45509.mail.sp1.yahoo.com> Message-ID: <4F7BF7C9.901@gmx.de> Hello Roger, thanks for the initial report, but we could use some more information on what your program is doing (a rpython traceback is simply not enough of a clue to guess the problem) also it would be helpful, if you tried if the code works correct on a nightly build. Since you run on windows, also try on a linux to rule out platform issues. -- Ronny On 04/04/2012 02:14 AM, Roger Flores wrote: > Hello all. A bug in a python application of mine resulted in this pypy crash: > > RPython traceback: > File "translator_goal_targetpypystandalone.c", line 1033, in entry_point > File "interpreter_function.c", line 1017, in funccall__star_1 > File "interpreter_function.c", line 1046, in funccall__star_1 > File "rpython_memory_gc_minimark.c", line 2512, in MiniMarkGC_collect_and_reserve > File "rpython_memory_gc_minimark.c", line 2216, in MiniMarkGC_minor_collection > > File "rpython_memory_gc_minimark.c", line 4503, in MiniMarkGC_collect_oldrefs_to_nursery > File "rpython_memory_gc_base.c", line 1714, in trace___trace_drag_out > File "rpython_memory_gc_minimarkpage.c", line 214, in ArenaCollection_malloc > File "rpython_memory_gc_minimarkpage.c", line 532, in ArenaCollection_allocate_new_page > File "rpython_memory_gc_minimarkpage.c", line 728, in ArenaCollection_allocate_new_arena > Fatal RPython error: MemoryError > > This application has requested the Runtime to terminate it in an unusual way. > Please contact the application's support team for more information.
> > > > At first I thought it was pypy reporting that the PPM compressor had ran out of memory, and simply didn't report the app's stack frames. But I've since come to realize that normally pypy does report a MemoryError and the stack frames involved, just like Python, and that this case was instead a RPython crash. > > I'm hoping the above stack trace from pypy is enough of a clue. > >> pypy -v > Python 2.7.2 (0e28b379d8b3, Feb 09 2012, 18:31:47) > [PyPy 1.8.0 with MSC v.1500 32 bit] on win32 > > > > -Roger > > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From amauryfa at gmail.com Wed Apr 4 09:27:15 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 4 Apr 2012 09:27:15 +0200 Subject: [pypy-dev] Findings in the Cython test suite In-Reply-To: References: Message-ID: 2012/4/4 Stefan Behnel : >> Its tp_call slot is called, but this is defined to __Pyx_PyCFunction_Call >> which is #defined to PyObject_Call, which itself invokes the tp_call slot... > > Interesting. Then that's the wrong thing to do in PyPy. I guess you just > put it there in your original patch because PyPy doesn't expose > PyCFunction_Call() and it seemed to be the obvious replacement. I put it there to make the code compile... I did not even realize how it was used. >> A solution would be to access the "base" tp_call, the one that CPython >> exposes as PyCFunction_Call. >> Unfortunately cpyext only defines one tp_call shared by all types, one >> which simply delegates to self.__call__. > > Makes sense for PyPy objects. But not when those objects are subclassed: self.__call__ always accesses the most derived method, whereas a tp_call slot is supposed to point to the type being defined. 
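Amaury's distinction between self.__call__ and a per-type slot can be illustrated with plain Python classes (a simplified analogy, not cpyext itself):

```python
class Base(object):
    def __call__(self):
        return "base"

class Derived(Base):
    def __call__(self):
        return "derived"

d = Derived()
# Generic dispatch, like cpyext's single shared tp_call delegating to
# self.__call__: the most derived method always wins.
print(d())               # derived
# Explicitly invoking the base type's slot, which is what C code
# calling something like PyCFunction_Call would expect to happen:
print(Base.__call__(d))  # base
```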
-- Amaury Forgeot d'Arc From tismer at stackless.com Wed Apr 4 12:47:23 2012 From: tismer at stackless.com (Christian Tismer) Date: Wed, 04 Apr 2012 12:47:23 +0200 Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> Message-ID: <4F7C26BB.1070701@stackless.com> On 4/1/12 11:23 AM, Armin Rigo wrote: > Hi Andrew, hi all, > > On Wed, Mar 28, 2012 at 18:23, Andrew Francis wrote: >>> Indeed, and it was around 2007, so I expect the authors to have been >>> involved in completely different things for quite some time now... >>> But I could try to contact them anyway. >> Communications is good :-) > I'm also thinking about writing a short paper collecting things I said > and think on various blog posts. A kind of "position paper". What do > others think of this idea? > >> My PyPy knowledge is still sketchy but I am changing that. I do understand >> the Twisted reactor model >> (thanks to my 2008 Pycon Talk) so I could follow discussions in that area. >> Is this discussed on IRC? > This is not discussed a lot right now. But it is apparently > relatively easy to adapt the epoll-based Twisted reactor to use the > 'transaction' module. (Again, this module is present in the stm-gc > branch; look for lib_pypy/transaction.py for the interface, and > pypy/module/transaction/* for the Python implementation on top of STM > as exposed by RPython.) This 'transaction' module is also meant to be > used directly, for example in this kind of Python code: > > for n in range(...): > do_something(n) > > If each call to do_something() has "reasonable chances" to be > independent from other calls, and if the order doesn't matter, then it > can be rewritten as: > > for n in range(...): > transaction.add(do_something, n) > transaction.run() > > In addition, each transaction can add more transactions that will be > run after it. 
So if you want to play with lib_pypy/stackless.py to > add calls to 'transaction', feel free :-) Maybe it will show that a > slightly different API is required from the 'transaction' module; I > don't really know so far. > Hi A(rmin|ndrew), it is funny how familiar this code looks, re-writing it in terms of Stackless and tasklets: ''' for n in range(...): tasklet(do_something)(n) stackless.run() In addition, each tasklet can add more tasklets that will be run after it. So if you (...) ''' Well, sure, it is not much more than a simple match, and the tasklets are more like sequences of transactions, when they give up their computation by stackless.schedule(), that adds them back to the pool. Anyway, the underlying ideas have similarities that make me think a lot. Thinking of Stackless, I was looking for a way to isolate tasklets in a way to let them run in parallel, as long as they are independent. In STM, independence is enforced, currently at a relatively high price. If Stackless were able to provide some better isolation by design, maybe there could be a hybrid approach, where most operations would not need to rely on STM all the time? Just rolling ideas out -- Chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Karl-Liebknecht-Str. 121 : *Starship* http://starship.python.net/ 14482 Potsdam : PGP key -> http://pgp.uni-mainz.de work +49 173 24 18 776 mobile +49 173 24 18 776 fax n.a. PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? 
http://www.stackless.com/ From arigo at tunes.org Wed Apr 4 15:11:57 2012 From: arigo at tunes.org (Armin Rigo) Date: Wed, 4 Apr 2012 15:11:57 +0200 Subject: [pypy-dev] pypy MemoryError crash In-Reply-To: <4F7BF7C9.901@gmx.de> References: <1333498491.32373.YahooMailNeo@web45509.mail.sp1.yahoo.com> <4F7BF7C9.901@gmx.de> Message-ID: Hi Ronny, hi Roger, On Wed, Apr 4, 2012 at 09:27, Ronny Pfannschmidt wrote: > thanks for the initial report, > but we could use some more information on what your program is doing > (a rpython traceback is simply not enough of a clue to guess the problem) Indeed, we would need to know more. Normally, the RPython MemoryError is caught and turned into a Python-level MemoryError at some point. But according to the traceback the interpreter is not running any Python code at this point. It is either a corner case, or a real bug, depending on how your Python code looks like. It may also mean that you're running with a too strict memory restriction (with the environment variable PYPY_GC_MAX=value too small). A bientôt, Armin. From roberto at unbit.it Wed Apr 4 17:55:45 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Wed, 4 Apr 2012 17:55:45 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: <2b7fbb99ef9d417aa8098420959a3ceb.squirrel@manage.unbit.it> References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> <2b7fbb99ef9d417aa8098420959a3ceb.squirrel@manage.unbit.it> Message-ID: > >> On Tue, Apr 3, 2012 at 11:32 AM, Roberto De Ioris >> wrote: >>> >>> >>>> >>>> Ok I see. >>>> >>>> Is the rest of the API used going to be cpyext? If so, then >>>> Py_Initialize is indeed a perfect choice.
>>>> >>> >>> I am about to add: >>> >>> Py_SetPythonHome >>> Py_SetProgramName >>> Py_Finalize >>> >>> i will put them into >>> >>> module/cpyext/src/pythonrun.c >>> >>> Do you think Py_Initialize should go there too ? >>> >>> -- >>> Roberto De Ioris >>> http://unbit.it >> >> Sounds like a good idea. Should I merge the pull request now or wait >> for the others? >> >> > > I think it is better to wait. Moving that to cpyext will avoid messing > with translators (adding more exported symbols) too. > > Ok, i am pretty satisfied with the current code (i have made a pull request). I have implemented: Py_Initialize Py_Finalize Py_SetPythonHome Py_SetProgramName all as rpython-cpyext except for Py_Initialize being splitted in a c part (it requires a call to RPython_StartupCode) Successfully tested with current uWSGI tip. Py_SetPythonHome add flawless support for virtualenv. -- Roberto De Ioris http://unbit.it From arigo at tunes.org Thu Apr 5 11:18:50 2012 From: arigo at tunes.org (Armin Rigo) Date: Thu, 5 Apr 2012 11:18:50 +0200 Subject: [pypy-dev] PyPy sprint in Leipzig, Germany (June 22-27) Message-ID: Leipzig PyPy sprint June 22 - 27, 2012 ====================================== The next PyPy sprint will be held --- for the first time in a while --- in a place where we haven't been so far: Leipzig, Germany, at the Python Academy's Teaching Center. It will take place from the 22nd to the 27th of June 2012, before EuroPython. Thanks to Mike Müller for organizing it! .. Python Academy: http://www.python-academy.com/ This is a fully public sprint, everyone is welcome to join us. All days are full sprint days, so it is recommended to arrive the 21st and leave the 28th. Topics and goals ---------------- Open. Here are some goals: - numpy: progress towards completing the ``numpypy`` module; try to use it in real code - stm: progress on Transactional Memory; try out the ``transaction`` module on real code.
- jit optimizations: there are a number of optimizations we can still try out or refactor. - work on various, more efficient data structures for Python language. A good example would be lazy string slicing/concatenation or more efficient objects. - any other PyPy-related topic is fine too. Location -------- Python Academy Leipzig, Germany. http://www.python-academy.com/center/find.html Thanks to Mike Müller for inviting us. The room holds 8 - 12 people depending how much space each individual needs. Drinks, snacks and a simple lunch, e.g. pizza or other fast food is provided. Dinner would be extra. Some accommodations: http://www.python-academy.com/center/accommodation.html Pretty much all hotels and hostels in the city center are fine too. The tram needs only 16 minutes from the central station to the Academy location. Grants ------ For students, we have the possibility to support some costs via PyPy funds. Additionally, we can support you applying for grants from the PSF and other sources. Registration ------------ If you'd like to come, please *sign up* either by announcing yourself on the pypy-dev mailing list, or by directly adding yourself to the list of people: https://bitbucket.org/pypy/extradoc/raw/extradoc/sprintinfo/leipzig2012/people.txt. We need to have a head count for the organization. If you are new to the project please drop a note about your interests and post any questions.
From arigo at tunes.org Thu Apr 5 11:24:17 2012 From: arigo at tunes.org (Armin Rigo) Date: Thu, 5 Apr 2012 11:24:17 +0200 Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: <4F7C26BB.1070701@stackless.com> References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> <4F7C26BB.1070701@stackless.com> Message-ID: Hi Christian, On 04.04.2012, Christian Tismer wrote: > it is funny how familiar this code looks, re-writing it in terms of > Stackless and tasklets: Yes, that similarity is not accidental :-) It looks like it's "just" a matter of hacking at lib_pypy/stackless.py to use ``transaction``. But as I said it's possible that some changes to the API of ``transaction`` would be needed for an easier mapping. Feel free to try it out. > In STM, independence is enforced, currently at a relatively high > price. > > If Stackless were able to provide some better isolation by design, > maybe there could be a hybrid approach, where most operations would > not need to rely on STM all the time? Maybe. It's a language design question though, and based on future technical work, so I'll not consider it for now. The point of STM is that it allows anything, and once we have a good implementation, we can start tweaking other aspects (like the language) to improve performance. A bientôt, Armin.
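For anyone who wants to experiment with this mapping before touching the stm-gc branch, the ``transaction`` interface Armin describes (add() plus run()) is small enough to fake serially. A toy stand-in, not the real lib_pypy/transaction.py, and with none of its parallelism or conflict detection:

```python
import random
from collections import deque

_pending = deque()

def add(f, *args):
    """Schedule f(*args) to run as one transaction."""
    _pending.append((f, args))

def run():
    """Run pending transactions until none are left.  The real module
    may run them in parallel and commit them in any order; picking a
    random one here at least exercises order-independence.  A running
    transaction may call add() to schedule more transactions."""
    while _pending:
        _pending.rotate(-random.randrange(len(_pending)))
        f, args = _pending.popleft()
        f(*args)

results = []
for n in range(5):
    add(results.append, n)
run()
print(sorted(results))  # [0, 1, 2, 3, 4], whatever order they ran in
```

Swapping such a stub for the real module would be one way to prototype the stackless.py changes Armin suggests before measuring anything under STM.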
From andrewfr_ice at yahoo.com Sun Apr 8 01:25:32 2012 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Sat, 7 Apr 2012 16:25:32 -0700 (PDT) Subject: [pypy-dev] The Work Plan Re: STM proposal funding References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> Message-ID: <1333841132.81890.YahooMailNeo@web120704.mail.ne1.yahoo.com> Hi Armin: ________________________________ From: Armin Rigo To: Andrew Francis Cc: PyPy Developer Mailing List ; "stackless at stackless.com" Sent: Sunday, April 1, 2012 5:23 AM Subject: Re: The Work Plan Re: [pypy-dev] STM proposal funding >I'm also thinking about writing a short paper collecting things I said >and think on various blog posts. A kind of "position paper". What do >others think of this idea? I think this is a great idea. >This is not discussed a lot right now. But it is apparently >relatively easy to adapt the epoll-based Twisted reactor to use the >'transaction' module. (Again, this module is present in the stm-gc >branch; look for lib_pypy/transaction.py for the interface, and >pypy/module/transaction/* for the Python implementation on top of STM >as exposed by RPython.) In addition, each transaction can add more transactions that will be >run after it. So if you want to play with lib_pypy/stackless.py to >add calls to 'transaction', feel free :-) Maybe it will show that a >slightly different API is required from the 'transaction' module; I >don't really know so far. Yes I can see transaction.add() being a wrapper/decorator for tasklet creation. However I think that is the easy part. I'm trying to reason about the behaviour. Starting with a simple Stackless programme: 1) All tasklets run on top of a single OS thread. 2) Tasklets do not yield until they are ready to commit, that is they do not call schedule() or block on a channel . 3) Tasklets do not share state/ or variables are immutable ( because of #1 and #2, this isn't important) This is a natural transaction A more complicated but somewhat contrived scenario: 1) tasklets are still running over a single OS thread 2) tasklets yield 3) tasklets share state def transactionA(account1, account2): account1.fromAccount -= 50 if someRandomFunction(): schedule() account2.toAccount += 50 def transactionB(account1, account2): t = arg.fromAccount * .1 account1.fromAccount -= t if someRandomFunction(): schedule() account2.toAccount += t since the tasklets yield, this opens the door for race conditions. I need to look at how the underlying rstm module works to see how easy it would be detect conflicts amongst tasklets. another scenario: def transferTasklet(ch) .... while someFlag: toAcount, fromAccount, amount = ch.receive() # transaction start fromAccount.withdraw(amount) toAccount.deposit(amount) #transaction end Question: without specific transaction_start() and transaction_commit() calls, how does rstm know what the start and finish of transactions are? Cheers, Andrew -------------- next part -------------- An HTML attachment was scrubbed...
3) Tasklets do not share state/ or variables are immutable ( because of #1 and #2, this isn't important) This is a natural transaction A more complicated but somewhat contrived scenario: 1) tasklets are still running over a single OS thread 2) tasklets yield 3) tasklets share state def transactionA(account1, account2): ??? account1.fromAccount -= 50 ??? if someRandomFunction(): ?? ? ? schedule() ??? account2.toAccount += 50 ?? def transactionB(account1, account2): ??? t = arg.fromAccount * .1 ?? account1.fromAccount -= t ?? if someRandomFunction(): ????? schedule() ?? account2.toAccount += t since the tasklets yield, this opens the door for race conditions. I need to look at how the underlying rstm module works to see how easy it would be detect conflicts amongst tasklets. another scenario: def transferTasklet(ch) ????? .... ????? while someFlag: ?????????????? toAcount, fromAccount, amount = ch.receive() ?????????????? # transaction start ??????????????? fromAccount.withdraw(amount) ??????????????? toAccount.deposit(amount) ?????????????? #transaction end Question: without specific transaction_start() and transaction_commit() calls, how does rstm know what the start and finish of transactions are? Cheers, Andrew -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andrewfr_ice at yahoo.com Sun Apr 8 02:01:18 2012 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Sat, 7 Apr 2012 17:01:18 -0700 (PDT) Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: <4F7C26BB.1070701@stackless.com> References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> <4F7C26BB.1070701@stackless.com> Message-ID: <1333843278.79507.YahooMailNeo@web120701.mail.ne1.yahoo.com> Hi Christian: ________________________________ From: Christian Tismer To: Armin Rigo Cc: Andrew Francis ; "stackless at stackless.com" ; PyPy Developer Mailing List Sent: Wednesday, April 4, 2012 6:47 AM Subject: Re: [pypy-dev] The Work Plan Re: STM proposal funding ... >Anyway, the underlying ideas have similarities that make me think a lot. >Thinking of Stackless, I was looking for a way to isolate tasklets in >a way to let them run in parallel, as long as they are independent. >In STM, independence is enforced, currently at a relatively high >price. >If Stackless were able to provide some better isolation by design, >maybe there could be a hybrid approach, where most operations would >not need to rely on STM all the time? >Just rolling ideas out? -- Chris The idea I like the most is to use STM and lock-free algorithms for the implementation of the channels themselves. Again, the Scalable Join Patterns and Parallel ML? papers are the inspiration for this approach. In contrast I have looked at Go's channel implementation and it has to do stuff like sorting to get the correct locking order.? What I like is that the approach assumes that Stackless programmers know how to write programmes that are fairly isolated. One? could experiment with this approach using the low-level rstm module or prototypes written in C using existing STM and lock-free libraries. Cheers, Andrew -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anto.cuni at gmail.com Sun Apr 8 14:53:27 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Sun, 08 Apr 2012 14:53:27 +0200 Subject: [pypy-dev] [pypy-commit] pypy win32-cleanup2: fix test, catching raised exception with 'raises' fails on win32 In-Reply-To: <20120408110858.E5CD482111@wyvern.cs.uni-duesseldorf.de> References: <20120408110858.E5CD482111@wyvern.cs.uni-duesseldorf.de> Message-ID: <4F818A47.1070204@gmail.com> Hi Matti, On 04/08/2012 01:08 PM, mattip wrote: > Author: Matti Picus > Branch: win32-cleanup2 > Changeset: r54246:36ff8218c07d > Date: 2012-04-08 13:30 +0300 > http://bitbucket.org/pypy/pypy/changeset/36ff8218c07d/ > > Log: fix test, catching raised exception with 'raises' fails on win32 > > diff --git a/pypy/module/_socket/test/test_sock_app.py b/pypy/module/_socket/test/test_sock_app.py > --- a/pypy/module/_socket/test/test_sock_app.py > +++ b/pypy/module/_socket/test/test_sock_app.py > @@ -617,7 +617,10 @@ > except timeout: > pass > # test sendall() timeout > - raises(timeout, cli.sendall, 'foobar' * 70) > + try: > + cli.sendall('foobar'*70) > + except: > + pass isn't it better to use a more precise "except"? The way it's written now the test would pass even if there is a typo. ciao, Anto From matti.picus at gmail.com Sun Apr 8 15:03:36 2012 From: matti.picus at gmail.com (Matti Picus) Date: Sun, 08 Apr 2012 16:03:36 +0300 Subject: [pypy-dev] Reducing the number of failing tests on win32 Message-ID: <4F818CA8.7070605@gmail.com> An HTML attachment was scrubbed... 
URL: From matti.picus at gmail.com Sun Apr 8 15:58:14 2012 From: matti.picus at gmail.com (Matti Picus) Date: Sun, 08 Apr 2012 16:58:14 +0300 Subject: [pypy-dev] [pypy-commit] pypy win32-cleanup2: fix test, catching raised exception with 'raises' fails on win32 Message-ID: <4F819976.8030007@gmail.com>

> Hi Matti,
>
> On 04/08/2012 01:08 PM, mattip wrote:
> > Author: Matti Picus
> > Branch: win32-cleanup2
> > Changeset: r54246:36ff8218c07d
> > Date: 2012-04-08 13:30 +0300
> > http://bitbucket.org/pypy/pypy/changeset/36ff8218c07d/
> >
> > Log: fix test, catching raised exception with 'raises' fails on win32
> >
> > diff --git a/pypy/module/_socket/test/test_sock_app.py b/pypy/module/_socket/test/test_sock_app.py
> > --- a/pypy/module/_socket/test/test_sock_app.py
> > +++ b/pypy/module/_socket/test/test_sock_app.py
> > @@ -617,7 +617,10 @@
> >          except timeout:
> >              pass
> >          # test sendall() timeout
> > -        raises(timeout, cli.sendall, 'foobar' * 70)
> > +        try:
> > +            cli.sendall('foobar'*70)
> > +        except:
> > +            pass
>
> isn't it better to use a more precise "except"? The way it's written now the
> test would pass even if there is a typo.
>
> ciao,
> Anto

yes, thanks. changeset c0ded41a61e0

Matti

From arigo at tunes.org Sun Apr 8 16:01:14 2012 From: arigo at tunes.org (Armin Rigo) Date: Sun, 8 Apr 2012 16:01:14 +0200 Subject: [pypy-dev] [pypy-commit] pypy win32-cleanup2: fix test, catching raised exception with 'raises' fails on win32 In-Reply-To: <4F818A47.1070204@gmail.com> References: <20120408110858.E5CD482111@wyvern.cs.uni-duesseldorf.de> <4F818A47.1070204@gmail.com> Message-ID: Hi Matti,

On Sun, Apr 8, 2012 at 14:53, Antonio Cuni wrote:
>> -        raises(timeout, cli.sendall, 'foobar' * 70)
>> +        try:
>> +            cli.sendall('foobar'*70)
>> +        except:
>> +            pass
>
> isn't it better to use a more precise "except"? The way it's written now the
> test would pass even if there is a typo.
Why did you have to change it? It seems to me that the old way is precisely equivalent to "except timeout:", and I doubt it's some win32-specific issue. There must be another deeper issue. A bientôt, Armin. From fijall at gmail.com Sun Apr 8 16:11:16 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 8 Apr 2012 16:11:16 +0200 Subject: [pypy-dev] Reducing the number of failing tests on win32 In-Reply-To: <4F818CA8.7070605@gmail.com> References: <4F818CA8.7070605@gmail.com> Message-ID: On Sun, Apr 8, 2012 at 3:03 PM, Matti Picus wrote: > I have done some cleaning house (it's that time of year) with win32, on > the win32-cleanup2 branch. Here are the issues I "handled" by brutally > skipping or rewriting tests: > > - test_recv_send_timeout in test_sock_app was raising the proper > exception, but for some reason raises() was not catching it in the test. > > - test_byte_order in _multiprocessing/test/test_connection called > socket.from_fd which does not exist except in unix > > - testing time.localtime called time.localtime(-1) which raises an error > > - XML_GetCurrentLineNumber() in pyexpat crashes python after an exception > is thrown in the MalformedInputText > > test. > > I am still working through why float('nan') sometimes has the signbit set, > making copysign(1., float('nan')) return -1 and will eventually add back in > the test in math/test/test_math, but I'm not sure how reliable copysign on > nan is anyway. > > I would be happy to get feedback as to the validity of the changes I have > made, hopefully they will show up soon on the buildbot: > > http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/468 > > Matti > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > Hey. I can't really assess the validity of windows-related changes, but I'm really glad not only amaury cares about windows :) Thanks!
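[The precise-"except" point Antonio and Armin make above can be seen in a small standalone sketch. The names below are hypothetical stand-ins, not the actual pypy test code: a bare `except:` also swallows the `NameError` produced by a typo, so the test silently "passes", while `except timeout:` lets the typo escape.]

```python
class timeout(Exception):
    """Stand-in for socket.timeout."""

def sendall_with_typo(data):
    # A deliberate typo: 'lenght' is undefined, so calling this raises
    # NameError, not the timeout the test meant to check for.
    return lenght(data)

# Bare except: the typo is silently swallowed and the test "passes".
try:
    sendall_with_typo('foobar' * 70)
except:
    swallowed = True

# Precise except: the NameError escapes past "except timeout:"
# and the test fails loudly.
try:
    try:
        sendall_with_typo('foobar' * 70)
    except timeout:
        pass
except NameError:
    propagated = True

assert swallowed and propagated
```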
-------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sun Apr 8 16:27:35 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 08 Apr 2012 16:27:35 +0200 Subject: [pypy-dev] Findings in the Cython test suite In-Reply-To: References: Message-ID: Amaury Forgeot d'Arc, 04.04.2012 09:27: > 2012/4/4 Stefan Behnel: >>> Its tp_call slot is called, but this is defined to __Pyx_PyCFunction_Call >>> which is #defined to PyObject_Call, which itself invokes the tp_call slot... >> >> Interesting. Then that's the wrong thing to do in PyPy. I guess you just >> put it there in your original patch because PyPy doesn't expose >> PyCFunction_Call() and it seemed to be the obvious replacement. > > I put it there to make the code compile... I did not even realize how > it was used. I found that cpyext already exports everything that is required to run the original PyCFunction_Call() function from CPython, so I just copied it over and it seems to work just fine. https://github.com/scoder/cython/commit/a286151e998d9b6d0888b63370fb5d4a1d707b81 That means that there's nothing to do on the PyPy side for this. It also means that we have the majority of tests passing or at least basically working now - only 80 out of the more than 2000 tests are currently failing. However, there are still crashers and there are also still failures that need further investigation. 
Crashers are best looked up in the log of the forked test runs: https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly-safe/lastBuild/consoleFull The results of completed test runs find their way into the web interface: https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly/ Stefan From bokr at oz.net Sun Apr 8 18:58:13 2012 From: bokr at oz.net (Bengt Richter) Date: Sun, 08 Apr 2012 09:58:13 -0700 Subject: [pypy-dev] Findings in the Cython test suite In-Reply-To: References: Message-ID: <4F81C3A5.2010507@oz.net> On 04/08/2012 07:27 AM Stefan Behnel wrote: > > Crashers are best looked up in the log of the forked test runs: > > https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly-safe/lastBuild/consoleFull > > The results of completed test runs find their way into the web interface: > > https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly/ > > Stefan Firefox warns strenuously that the certificate for your https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly/ https site might not be trustable. Do you suggest just telling firefox to make an exception? I haven't made any exceptions before, so I have some resistance ... (not implying here that I wouldn't trust you personally ;-) What do pypyers do? Certificate serial 00:9C:BE:68:16:2B:98:E4:46 expired 05/22/2010 01:47:22 (05/22/2010 08:47:22 GMT) (apparently a month after it was created). Regards, Bengt Richter PS. Don't go to any extra trouble to make the info accessible for me -- I was just curious. 
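[As an aside on checking certificate dates yourself: this is a generic openssl recipe, not anything from this thread. The self-signed certificate generated below is just a throwaway example; the same `openssl x509 -noout -dates` inspection works on a live server's certificate fetched with `openssl s_client -connect host:port`.]

```shell
# Create a throwaway self-signed certificate valid for 30 days...
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
    -keyout /tmp/key.pem -out /tmp/cert.pem -days 30 2>/dev/null

# ...and print its serial number and validity window (notBefore/notAfter).
openssl x509 -noout -serial -dates -in /tmp/cert.pem
```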
From stefan_ml at behnel.de Sun Apr 8 19:09:50 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 08 Apr 2012 19:09:50 +0200 Subject: [pypy-dev] Findings in the Cython test suite In-Reply-To: <4F81C3A5.2010507@oz.net> References: <4F81C3A5.2010507@oz.net> Message-ID: Bengt Richter, 08.04.2012 18:58: > On 04/08/2012 07:27 AM Stefan Behnel wrote: >> Crashers are best looked up in the log of the forked test runs: >> >> https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly-safe/lastBuild/consoleFull >> >> The results of completed test runs find their way into the web interface: >> >> https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly/ > > Firefox warns strenuously that the certificate for your > > https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly/ > https site might not be trustable. > > Do you suggest just telling firefox to make an exception? > I haven't made any exceptions before, so I have some resistance ... Get used to it. :) Seriously, what do you care about the certificate when all you want is to read the page? It's not like you're going to send any sensitive data over. (And it won't do anything with that data even if you did.) I'm personally rather annoyed by the way Firefox treats broken/untrusted/self-signed certificates. It should warn when you actually do send data over, not right on connecting. In 99% of the cases, I really don't care what server I am connecting to. > (not implying here that I wouldn't trust you personally ;-) > What do pypyers do? > > Certificate serial 00:9C:BE:68:16:2B:98:E4:46 > expired > 05/22/2010 01:47:22 > (05/22/2010 08:47:22 GMT) > (apparently a month after it was created). Interesting, guess we should fix that. Thanks for noting. Anyway, the only reason it's using a certificate is to let the core developers access it through an encrypted connection. The certificate itself is rather uninteresting to "normal" users. 
Stefan From matti.picus at gmail.com Tue Apr 10 06:46:19 2012 From: matti.picus at gmail.com (Matti Picus) Date: Tue, 10 Apr 2012 07:46:19 +0300 Subject: [pypy-dev] multiple versions of Python.h Message-ID: <4F83BB1B.6090101@gmail.com> Two copies of Python.h exist: pypy\include\Python.h pypy\module\cpyext\include\Python.h When translating targetpypystandalone on win32, the created Makefile prefers the cpyext\include. It appears that the CConfig class used to create the Makefile is the one in module/cpyext/api.py (since I see the flag Py_BUILD_CORE which occurs only there), and the include_dirs there does not pull in the first path. Two questions: 1. Do we really want two copies of this header in our tree 2. If so, shouldn't the first one be given precedence? From matti.picus at gmail.com Tue Apr 10 07:32:21 2012 From: matti.picus at gmail.com (Matti Picus) Date: Tue, 10 Apr 2012 08:32:21 +0300 Subject: [pypy-dev] multiple versions of Python.h In-Reply-To: <4F83BB1B.6090101@gmail.com> References: <4F83BB1B.6090101@gmail.com> Message-ID: <4F83C5E5.1020404@gmail.com> An HTML attachment was scrubbed... URL: From leo at cogsci.ucsd.edu Wed Apr 11 02:23:06 2012 From: leo at cogsci.ucsd.edu (Leo Trottier) Date: Tue, 10 Apr 2012 17:23:06 -0700 Subject: [pypy-dev] PyPy as part of a larger, bundled project? Message-ID: Hi, A number of Python applications (e.g. http://calibre-ebook.com/, http://www.psychopy.org/ ... http://en.wikipedia.org/wiki/List_of_Python_software#Applications) are deployed together with the libraries and interpreter that they will use. Often, these applications are larger, and can end up performing operations that are computationally intensive. In the case of Calibre, e.g., large batch conversions from one book format to another can take more than an hour (for sufficiently large batches). 
This motivates a couple questions: 1) How difficult might it be, currently, to swap in PyPy[1] as the interpreter for, say, Calibre (http://calibre-ebook.com/download_linux)? (I am in the process of trying to do so presently, but the question is meant to be a bit more general) 2) Might these larger, interpreter-bundled applications be good targets for "high impact" deployments of PyPy? PyPy could, here, provide a tangible benefit without requiring any extra work by end-users, thereby potentially serving as a useful demonstration platform while quickly increasing PyPy usage. Leo [1] https://bugs.launchpad.net/calibre/+bug/977453 -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lavrenov at gmail.com Wed Apr 11 08:41:19 2012 From: max.lavrenov at gmail.com (Max Lavrenov) Date: Wed, 11 Apr 2012 10:41:19 +0400 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> <2b7fbb99ef9d417aa8098420959a3ceb.squirrel@manage.unbit.it> Message-ID: Hello everyone! I got some errors while I was building the embedded-pypy branch with python translate.py -Ojit --shared. Could anybody help me with it, please? The default branch builds without any errors.
[translation:ERROR] In file included from debug_print.c:16:0: [translation:ERROR] common_header.h:18:0: warning: "_POSIX_C_SOURCE" redefined [enabled by default] [translation:ERROR] /usr/include/features.h:215:0: note: this is the location of the previous definition [translation:ERROR] In file included from ../module_cache/module_11.c:157:0: [translation:ERROR] /home/e-max/workspace/pypy/pypy/translator/c/src/dtoa.c:132:0: warning: "PyMem_Malloc" redefined [enabled by default] [translation:ERROR] /home/e-max/workspace/pypy/pypy/module/cpyext/include/pymem.h:8:0: note: this is the location of the previous definition [translation:ERROR] /home/e-max/workspace/pypy/pypy/translator/c/src/dtoa.c:133:0: warning: "PyMem_Free" redefined [enabled by default] [translation:ERROR] /home/e-max/workspace/pypy/pypy/module/cpyext/include/pymem.h:9:0: note: this is the location of the previous definition [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 2017, in [translation:ERROR] tracker.process(f, g, filename=fn) [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 1910, in process [translation:ERROR] tracker = parser.process_function(lines, filename) [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 1425, in process_function [translation:ERROR] table = tracker.computegcmaptable(self.verbose) [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 52, in computegcmaptable [translation:ERROR] self.parse_instructions() [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 207, in parse_instructions [translation:ERROR] insn = meth(line) [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 1047, in visit_jmp [translation:ERROR] return 
FunctionGcRootTracker.visit_jmp(self, line) [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 710, in visit_jmp [translation:ERROR] raise NoPatternMatch(repr(self.lines[tablelin])) [translation:ERROR] __main__.NoPatternMatch: '\t.long\t.L370-.L376\n' [translation:ERROR] make: *** [jit_backend_llsupport_descr.gcmap] Error 1 [translation:ERROR] make: *** Waiting for unfinished jobs.... [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 2017, in [translation:ERROR] tracker.process(f, g, filename=fn) [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 1910, in process [translation:ERROR] tracker = parser.process_function(lines, filename) [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 1425, in process_function [translation:ERROR] table = tracker.computegcmaptable(self.verbose) [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 65, in computegcmaptable [translation:ERROR] return self.gettable() [translation:ERROR] File "/home/e-max/workspace/pypy/pypy/translator/c/gcc/trackgcroot.py", line 99, in gettable [translation:ERROR] localvar) [translation:ERROR] AssertionError: pypy_g_generate_tokens: %r10 [translation:ERROR] make: *** [interpreter_pyparser_pytokenizer.gcmap] Error 1 [translation:ERROR] """) On Wed, Apr 4, 2012 at 19:55, Roberto De Ioris wrote: > > > > >> On Tue, Apr 3, 2012 at 11:32 AM, Roberto De Ioris > >> wrote: > >>> > >>> > >>>> > >>>> Ok I see. > >>>> > >>>> Is the rest of the API used going to be cpyext? If so, then > >>>> Py_Initialize is indeed a perfect choice. 
> >>>> > >>> > >>> > >>> I am about to add: > >>> > >>> Py_SetPythonHome > >>> Py_SetProgramName > >>> Py_Finalize > >>> > >>> i will put them into > >>> > >>> module/cpyext/src/pythonrun.c > >>> > >>> Do you think Py_Initialize should go there too ? > >>> > >>> -- > >>> Roberto De Ioris > >>> http://unbit.it > >> > >> Sounds like a good idea. Should I merge the pull request now or wait > >> for the others? > >> > >> > > > > I think it is better to wait. Moving that to cpyext will avoid messing > > with translators (adding more exported symbols) too. > > > > > > Ok, i am pretty satisfied with the current code (i have made a pull > request). > > I have implemented: > > Py_Initialize > Py_Finalize > Py_SetPythonHome > Py_SetProgramName > > all as rpython-cpyext except for Py_Initialize being splitted in a c part > (it requires a call to RPython_StartupCode) > > Successfully tested with current uWSGI tip. Py_SetPythonHome add flawless > support for virtualenv. > > > > -- > Roberto De Ioris > http://unbit.it > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Wed Apr 11 09:22:51 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 11 Apr 2012 09:22:51 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> <2b7fbb99ef9d417aa8098420959a3ceb.squirrel@manage.unbit.it> Message-ID: Hi, 2012/4/11 Max Lavrenov > Hello everyone! 
> > I got some errors while i was building the embedded-pypy branch with > python translate.py -Ojit --shared. > Could anybody help me with it, please? > trackgcroot.py does not recognize some constructs used when compiling with -fPIC. I thought I fixed them though... Anyway it will crash at runtime: because of code relocation, function pointers are actually addresses into a translation table, which contains the real code address. I already fixed this for Windows a long time ago. Meanwhile, the best thing to do is to avoid assembler magic, and translate with the option: --gcrootfinder=shadowstack -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Wed Apr 11 09:24:52 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 11 Apr 2012 09:24:52 +0200 Subject: [pypy-dev] PyPy as part of a larger, bundled project? In-Reply-To: References: Message-ID: Leo Trottier, 11.04.2012 02:23: > A number of Python applications (e.g. http://calibre-ebook.com/, > http://www.psychopy.org/ ... > http://en.wikipedia.org/wiki/List_of_Python_software#Applications) are > deployed together with the libraries and interpreter that they will use. > > Often, these applications are larger, and can end up performing operations > that are computationally intensive. In the case of Calibre, e.g., large > batch conversions from one book format to another can take more than an > hour (for sufficiently large batches). Are you sure the bottleneck is in Python code here? PyPy won't magically speed up image conversions for you, for example. You can expect it to be faster for HTML processing with its bundled html5lib, though, and maybe also PDF generation, which it seems to be using pyPDF for. However, for XML processing, which I would expect to be a substantial part of the work when converting between e-book formats, it appears to be using lxml - you can't beat that with PyPy. 
Calibre likely won't run in PyPy directly as the GUI uses PyQT4 and it also uses extension modules for plugins. So I'm rather confident that it will not be easy to make it work at all with PyPy, or even to make any of the more interesting conversion pipelines work entirely in PyPy. You can still give it a try, though. Maybe you can manage to get at least an HTML-to-PDF pipeline working by forking off an external PyPy process and porting the libraries. However, you seem to be more interested in making it run fast than in making it run in PyPy. Your time may better be invested into pushing more parallel processing into the right places. You mentioned batch processing; that sounds like the bulk of the workload is trivially parallelisable. And maybe a bit of profiling against your specific processing needs would hint at a specific bottleneck that's easy to fix? Stefan From Ronny.Pfannschmidt at gmx.de Wed Apr 11 11:02:05 2012 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Wed, 11 Apr 2012 11:02:05 +0200 Subject: [pypy-dev] changing the stdlib tracking Message-ID: <4F85488D.8080506@gmx.de> hi, since it's kind of troublesome to deal with stdlib vs modified-stdlib, I'd like to propose a new model for tracking our changes to the stdlib: the unmodified stdlib would be in a branch, say vendor/stdlib, while our modifications will get inlined in the default branch. That way we get easier merging and easier diffing, and we might be able to supply patches to CPython more easily. The same approach can also be applied to pylib/pytest. So I propose 2 vendor branches for tracking unmodified versions of libs we use:

* vendor/stdlib
* vendor/pytest

and of course the inlining of modified-stdlib. Unless there are complaints I would implement the changes in a few days in a branch. -- Ronny From amauryfa at gmail.com Wed Apr 11 11:08:30 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 11 Apr 2012 11:08:30 +0200 Subject: [pypy-dev] changing the stdlib tracking
In-Reply-To: <4F85488D.8080506@gmx.de> References: <4F85488D.8080506@gmx.de> Message-ID: 2012/4/11 Ronny Pfannschmidt > hi, > > since its kind of troublesome to deal with stdlib vs modified-stdlib, > > i'd like to propose an new model for tracking our changes to the stdlib > > the unmodified stdlib would be in a branch, say vendor/stdlib, > while our modifications will get inlined in the default branch > > that way we get easier merging and easier diffing + we might be able to > supply patches to cpython more easily > > the same approach can also be applied to pylib/pytest > > > so i propose 2 vendor branches for tracking unmodified versions of libs we > use > > * vendor/stdlib > * vendor/pytest > > and of course the inlining of modified-stdlib > > unless there are complaints i would implement the changes in a few days in > a branch > +1. The two directories cause issues in some applications. Does it make sense to merge lib_pypy as well? -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Ronny.Pfannschmidt at gmx.de Wed Apr 11 12:35:59 2012 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Wed, 11 Apr 2012 12:35:59 +0200 Subject: [pypy-dev] changing the stdlib tracking In-Reply-To: References: <4F85488D.8080506@gmx.de> Message-ID: <4F855E8F.4090308@gmx.de> On 04/11/2012 11:08 AM, Amaury Forgeot d'Arc wrote: > 2012/4/11 Ronny Pfannschmidt > >> hi, >> >> since its kind of troublesome to deal with stdlib vs modified-stdlib, >> >> i'd like to propose an new model for tracking our changes to the stdlib >> >> the unmodified stdlib would be in a branch, say vendor/stdlib, >> while our modifications will get inlined in the default branch >> >> that way we get easier merging and easier diffing + we might be able to >> supply patches to cpython more easily >> >> the same approach can also be applied to pylib/pytest >> >> >> so i propose 2 vendor branches for tracking unmodified versions of libs we >> use >> >> * vendor/stdlib >> * vendor/pytest >> >> and of course the inlining of modified-stdlib >> >> unless there are complaints i would implement the changes in a few days in >> a branch >> > > +1. The two directories cause issues in some applications. > Does it make sense to merge lib_pypy as well? > Parts of lib_pypy make sense to move (since they map stdlib stuff), but the parts that are not stdlib should stay. It needs more discussion; in particular, pyrepl and that strange distributed thing need to stay. From arigo at tunes.org Wed Apr 11 13:28:48 2012 From: arigo at tunes.org (Armin Rigo) Date: Wed, 11 Apr 2012 13:28:48 +0200 Subject: [pypy-dev] STM/AME for CPython? Message-ID: Hi all, Contrary to what is written in the STM/AME proposal, since last week I believe it might be (reasonably) possible to apply the same techniques to CPython. For now I am experimenting with applying them in a simple CPython-like interpreter. If it works, it might end up as a patch to the core parts of CPython.
The interesting property is that it would still be able to run unmodified C extension modules --- the Python code gets the benefits of multi-core STM/AME only if it involves only the patched parts of the C code, but in all cases it still works correctly. (Moreover the performance hit is well below 2x, more like 20%.) I did not try to hack CPython so far, but only a custom interpreter for a Lisp language, whose implementation should be immediately familiar to anyone who knows CPython C code: https://bitbucket.org/arigo/duhton . The non-standard built-in function is "transaction", which schedules a transaction to run later. The code contains the necessary tweaks to reference counting, and seems to work on all examples, but leaks some of the objects so far. Fixing this directly might be possible, but I'm not sure yet (it might require interaction with the cycle-detecting GC of CPython). If anyone is interested, I could create a different mailing list in order to discuss this in more detail, as it's only related to PyPy on the original idea level. From experience I would think that this has the potential to become a Psyco-like experiment, but unlike 10 years ago, today I'm not ready any more to dive completely alone into a project of that scale :-) A bientôt, Armin. From arigo at tunes.org Wed Apr 11 14:29:25 2012 From: arigo at tunes.org (Armin Rigo) Date: Wed, 11 Apr 2012 14:29:25 +0200 Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: <1333841132.81890.YahooMailNeo@web120704.mail.ne1.yahoo.com> References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> <1333841132.81890.YahooMailNeo@web120704.mail.ne1.yahoo.com> Message-ID: Hi Andrew, On Sun, Apr 8, 2012 at 01:25, Andrew Francis wrote: > Question: without specific transaction_start() and transaction_commit() > calls, how does rstm know what the start and finish of transactions are?
Please take a different point of view: if the proper adaptation is done, the users of the "stackless" module don't need any change. They will continue to work as they are, with the same semantics as today --- with the exception of the ordering among tasklets, which will become truly random. This can be achieved by hacking at a different level. A bientôt, Armin. From anto.cuni at gmail.com Wed Apr 11 14:55:30 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 11 Apr 2012 14:55:30 +0200 Subject: [pypy-dev] [pypy-commit] pypy win32-stdlib: fix close() and tests for file closing In-Reply-To: <20120411112428.31DCC82F4E@wyvern.cs.uni-duesseldorf.de> References: <20120411112428.31DCC82F4E@wyvern.cs.uni-duesseldorf.de> Message-ID: <4F857F42.2090108@gmail.com> Hi Matti, On 04/11/2012 01:24 PM, mattip wrote: > Author: Matti Picus > Branch: win32-stdlib > Changeset: r54283:2a8e4f56269f > Date: 2012-04-11 13:26 +0300 > http://bitbucket.org/pypy/pypy/changeset/2a8e4f56269f/ > > Log: fix close() and tests for file closing > > diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py > deleted file mode 100644 the idea for 2.7 vs modified-2.7 is to keep 2.7 intact, so you should not remove the file from there. What we usually do is: $ hg cp 2.7/foo.py modified-2.7/ $ hg ci -m "make a copy" $ emacs modified-2.7/foo.py ...
ciao, Anto From max.lavrenov at gmail.com Wed Apr 11 16:42:51 2012 From: max.lavrenov at gmail.com (Max Lavrenov) Date: Wed, 11 Apr 2012 18:42:51 +0400 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> <2b7fbb99ef9d417aa8098420959a3ceb.squirrel@manage.unbit.it> Message-ID: Thank you! Finally I've built libpypy.so and uwsgi with it. On Wed, Apr 11, 2012 at 11:22, Amaury Forgeot d'Arc wrote: > Hi, > > 2012/4/11 Max Lavrenov > >> Hello everyone! >> >> I got some errors while i was building the embedded-pypy branch with >> python translate.py -Ojit --shared. >> Could anybody help me with it, please? >> > > trackgcroot.py does not recognize some constructs used when compiling with > -fPIC. > I thought I fixed them though... > Anyway it will crash at runtime: because of code relocation, function > pointers are actually > addresses into a translation table, which contains the real code address. > I already fixed this for Windows a long time ago. > > Meanwhile, the best thing to do is to avoid assembler magic, and translate > with the option: > --gcrootfinder=shadowstack > > -- > Amaury Forgeot d'Arc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leo at cogsci.ucsd.edu Wed Apr 11 20:56:56 2012 From: leo at cogsci.ucsd.edu (Leo Trottier) Date: Wed, 11 Apr 2012 11:56:56 -0700 Subject: [pypy-dev] PyPy as part of a larger, bundled project? In-Reply-To: References: Message-ID: Actually, my motivation was not to get Calibre to be faster -- I use it only occasionally.
All I knew was that Calibre was an application (1) built on Python, that (2) the Python interpreter it used was baked-in to the distribution, and (3) it seemed to perform a number of operations somewhat slowly. It seems that whenever (1) and (2) hold, there is a potential opportunity for the wide-scale deployment of PyPy, taking it from being used on a handful of servers and enthusiasts computers to instead being deployed on thousands or 10s of thousands of end-user applications. Perhaps PyPy *might* not immediately lead to an increase in performance (though one suspects that in general, it would), but the mere fact that it's available to the application developer could inspire new development paradigms that take advantage of PyPy's features. And it could serve as a practical test-bed for deploying PyPy and for evaluating tweaks to it. Leo On Wed, Apr 11, 2012 at 12:24 AM, Stefan Behnel wrote: > Leo Trottier, 11.04.2012 02:23: > > A number of Python applications (e.g. http://calibre-ebook.com/, > > http://www.psychopy.org/ ... > > http://en.wikipedia.org/wiki/List_of_Python_software#Applications) are > > deployed together with the libraries and interpreter that they will use. > > > > Often, these applications are larger, and can end up performing > operations > > that are computationally intensive. In the case of Calibre, e.g., large > > batch conversions from one book format to another can take more than an > > hour (for sufficiently large batches). > > Are you sure the bottleneck is in Python code here? PyPy won't magically > speed up image conversions for you, for example. You can expect it to be > faster for HTML processing with its bundled html5lib, though, and maybe > also PDF generation, which it seems to be using pyPDF for. However, for XML > processing, which I would expect to be a substantial part of the work when > converting between e-book formats, it appears to be using lxml - you can't > beat that with PyPy. 
> > Calibre likely won't run in PyPy directly as the GUI uses PyQT4 and it also > uses extension modules for plugins. So I'm rather confident that it will > not be easy to make it work at all with PyPy, or even to make any of the > more interesting conversion pipelines work entirely in PyPy. > > You can still give it a try, though. Maybe you can manage to get at least > an HTML-to-PDF pipeline working by forking off an external PyPy process and > porting the libraries. > > However, you seem to be more interested in making it run fast than in > making it run in PyPy. Your time may better be invested into pushing more > parallel processing into the right places. You mentioned batch processing, > that sounds like the bulk of the workload is trivially parallelisable. And > maybe a bit of profiling against your specific processing needs would hint > at a specific bottleneck that's easy to fix? > > Stefan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From minhee at dahlia.kr Thu Apr 12 05:31:41 2012 From: minhee at dahlia.kr (Hong Minhee) Date: Thu, 12 Apr 2012 12:31:41 +0900 Subject: [pypy-dev] __exit__ not called in certain cases Message-ID: I reported https://bugs.pypy.org/issue1126 which shows that __exit__ is not called in certain cases when I think it should be. The code to reproduce the bug is there. The code works on CPython. It was closed as invalid saying this is a GC issue, but I don't think it is. This is about __exit__, not __del__. __exit__ should be always called if __enter__ succeeds. 
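[Editor's note] The behaviour described above can be reduced to a few lines. This is an illustrative sketch only, not the actual reproduction code from issue 1126: a `with` block suspended inside a generator runs `__exit__` only when the generator object itself is finalized. CPython's reference counting finalizes an abandoned generator immediately, while PyPy defers that to the next GC cycle -- which is why the report was closed as a GC issue.

```python
# Illustrative sketch, not the code from issue 1126: the __exit__ of a
# with-block suspended inside a generator runs only when the generator
# is finalized.
class Tracker(object):
    exited = False

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        Tracker.exited = True


def produce():
    with Tracker():
        yield 1
        yield 2


gen = produce()
next(gen)     # enter the with-block, then suspend at the first yield
gen.close()   # finalize explicitly: GeneratorExit makes __exit__ run now
print(Tracker.exited)  # -> True
```

Dropping the last reference instead of calling close() gives the same prompt result on CPython, but on PyPy `__exit__` would then wait for a collection -- hence the divergence between the two interpreters.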
From benjamin at python.org Thu Apr 12 05:33:08 2012 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 11 Apr 2012 23:33:08 -0400 Subject: [pypy-dev] __exit__ not called in certain cases In-Reply-To: References: Message-ID: 2012/4/11 Hong Minhee : > I reported https://bugs.pypy.org/issue1126 which shows that __exit__ > is not called in certain cases when I think it should be. The code to > reproduce the bug is there. The code works on CPython. > > It was closed as invalid saying this is a GC issue, but I don't think > it is. This is about __exit__, not __del__. __exit__ should be > always called if __enter__ succeeds. It is about GC, because the GC is responsible for calling __exit__ here. -- Regards, Benjamin From stefan_ml at behnel.de Thu Apr 12 07:50:12 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Thu, 12 Apr 2012 07:50:12 +0200 Subject: [pypy-dev] PyPy as part of a larger, bundled project? In-Reply-To: References: Message-ID: Leo Trottier, 11.04.2012 20:56: > Actually, my motivation was not to get Calibre to be faster -- I use it > only occasionally. All I knew was that Calibre was an application (1) built > on Python, that (2) the Python interpreter it used was baked-in to the > distribution, and (3) it seemed to perform a number of operations somewhat > slowly. > > It seems that whenever (1) and (2) hold, there is a potential opportunity > for the wide-scale deployment of PyPy, taking it from being used on a > handful of servers and enthusiasts computers to instead being deployed on > thousands or 10s of thousands of end-user applications. > > Perhaps PyPy *might* not immediately lead to an increase in performance > (though one suspects that in general, it would), but the mere fact that > it's available to the application developer could inspire new development > paradigms that take advantage of PyPy's features. And it could serve as a > practical test-bed for deploying PyPy and for evaluating tweaks to it. Ah, ok.
Then your question is somewhat backwards, though. You should start by looking for an application that matches the above properties *and* that runs well in PyPy or is at least not too difficult to port. Otherwise, this discussion will stay at a rather theoretical level and the answer is "yes, sure, whenever you find an application for which it works, it will work for that application". You could start by looking through PyPy's compatibility list to see if any of the names looks familiar and suitable. When users report problems that are listed there, that's usually because they have an interest in making something run in PyPy. Stefan From leo at cogsci.ucsd.edu Thu Apr 12 09:08:19 2012 From: leo at cogsci.ucsd.edu (Leo Trottier) Date: Thu, 12 Apr 2012 00:08:19 -0700 Subject: [pypy-dev] PyPy as part of a larger, bundled project? In-Reply-To: References: Message-ID: My hope is that someone who is already familiar with the PyPy build process and various compatibility quirks might be able to both quickly determine whether there would be build compatibility as well as, perhaps, succeeding in actually building it. When I fail to build something against PyPy, it's less obvious to me than to many of the people here whether the failure can be easily resolved by the use of cpyext or some other sophisticated, PyPy-specific hack. My hope and suspicion, here, is that this kind of task is one that benefits significantly from experience with the subtleties of PyPy interoperability, rather than mere cleverness or a more general familiarity with software development. I.e., that this is a challenge that might be on the one hand quite straightforward (if a little tedious) to some, while perhaps nearly impossible to many others. This just seemed like potential "low-hanging fruit" -- minimal work that might lead to greatly expanded deployment of PyPy.
Leo On Wed, Apr 11, 2012 at 10:50 PM, Stefan Behnel wrote: > Leo Trottier, 11.04.2012 20:56: > > Actually, my motivation was not to get Calibre to be faster -- I use it > > only occasionally. All I knew was that Calibre was an application (1) > built > > on Python, that (2) the Python interpreter it used was baked-in to the > > distribution, and (3) it seemed to perform a number of operations > somewhat > > slowly. > > > > It seems that whenever (1) and (2) hold, there is a potential opportunity > > for the wide-scale deployment of PyPy, taking it from being used on a > > handful of servers and enthusiasts computers to instead being deployed on > > thousands or 10s of thousands of end-user applications. > > > > Perhaps PyPy *might* not immediately lead to an increase in performance > > (though one suspects that in general, it would), but the mere fact that > > it's available to the application developer could inspire new development > > paradigms that take advantage of PyPy's features. And it could serve as > a > > practical test-bed for deploying PyPy and for evaluating tweaks to it. > > Ah, ok. Then your question is somewhat backwards, though. You should start > by looking for an application that matches the above properties *and* that > runs well in PyPy or is at least not too difficult to port. Otherwise, this > discussion will stay at a rather theoretical level and the answer is "yes, > sure, whenever you find an application for which it works, it will work for > that application". > > You could start by looking through PyPy's compatibility list to see if any > of the names looks familiar and suitable. When users report problems that > are listed there, that's usually because they have an interest in making > something run in PyPy. 
> > Stefan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Thu Apr 12 09:12:39 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Thu, 12 Apr 2012 09:12:39 +0200 Subject: [pypy-dev] PyPy as part of a larger, bundled project? In-Reply-To: References: Message-ID: Leo Trottier, 12.04.2012 09:08: > My hope is that someone who is already familiar with the PyPy build process > and various compatibility quirks might be able to both quickly determine > whether there would be build compatibility as well as, perhaps, succeeding > in actually building it. When I fail to build something against PyPy, it's > less obvious to me than to many of the people here whether the failure can > be easily resolved by the use of cpyext or some other sophisticated, > PyPy-specicific hack. > > My hope and suspicion, here, is that this kind of task is one that benefits > significantly from experience with the subtleties of PyPy interoperability, > rather than mere cleverness or a more general familiarity with software > development. I.e., that this is a challenge that might be on the one hand > quite straightforward (if a little tedious) to some, while perhaps nearly > impossible to many others. Seems to answer the question why it's not more commonly done. Stefan From amauryfa at gmail.com Thu Apr 12 09:46:38 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Thu, 12 Apr 2012 09:46:38 +0200 Subject: [pypy-dev] PyPy as part of a larger, bundled project? In-Reply-To: References: Message-ID: 2012/4/12 Stefan Behnel > > My hope and suspicion, here, is that this kind of task is one that > benefits > > significantly from experience with the subtleties of PyPy > interoperability, > > rather than mere cleverness or a more general familiarity with software > > development. 
I.e., that this is a challenge that might be on the one hand > > quite straightforward (if a little tedious) to some, while perhaps nearly > > impossible to many others. > > Seems to answer the question why it's not more commonly done. I don't know the application you are referring to, and don't have the time to do it myself, but I definitely want to help anyone who would like to take this route. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Thu Apr 12 12:52:44 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 12 Apr 2012 12:52:44 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> <2b7fbb99ef9d417aa8098420959a3ceb.squirrel@manage.unbit.it> Message-ID: On Wed, Apr 11, 2012 at 4:42 PM, Max Lavrenov wrote: > > Thank you! Finally I've built libpypy.so and uwsgi with it. > Cool! Feel free to share your story with either us or via some blog if you find it interesting! Cheers, fijal > > On Wed, Apr 11, 2012 at 11:22, Amaury Forgeot d'Arc wrote: >> Hi, >> >> 2012/4/11 Max Lavrenov >> >>> Hello everyone! >>> >>> I got some errors while I was building the embedded-pypy branch with >>> python translate.py -Ojit --shared. >>> Could anybody help me with it, please? >>> >> >> trackgcroot.py does not recognize some constructs used when compiling >> with -fPIC. >> I thought I fixed them though... >> Anyway it will crash at runtime: because of code relocation, function >> pointers are actually >> addresses into a translation table, which contains the real code address.
>> I already fixed this for Windows a long time ago. >> >> Meanwhile, the best thing to do is to avoid assembler magic, and >> translate with the option: >> --gcrootfinder=shadowstack >> >> -- >> Amaury Forgeot d'Arc >> > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lavrenov at gmail.com Thu Apr 12 14:18:13 2012 From: max.lavrenov at gmail.com (Max Lavrenov) Date: Thu, 12 Apr 2012 16:18:13 +0400 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <4e1217cd245fb49f9bb2ade2085b6da1.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> <2b7fbb99ef9d417aa8098420959a3ceb.squirrel@manage.unbit.it> Message-ID: I am afraid it's not interesting. I just wanted to test it with our small project - geoip server which resolves ip address to region. Now we use bjoern wsgi server for running this project. Basically it does nothing but a bisect search on array.array. When I run this code without wsgi wrapper using pypy it shows about 400% speed boost. Unfortunately uwsgi with pypy plugin shows me pretty much the same performance as usual uwsgi. When I wrote a simple application which loops over a "for" cycle and increments some variable, I got the same result. I'll try running more tests to figure out what I'm doing wrong. I am a fan of your work and looking forward to using pypy in our production project. On Thu, Apr 12, 2012 at 14:52, Maciej Fijalkowski wrote: > On Wed, Apr 11, 2012 at 4:42 PM, Max Lavrenov wrote: > >> >> Thank you! Finally I've built libpypy.so and uwsgi with it. >> > > Cool!
Feel free to share your story with either us or via some blog if you > find it interesting! > > Cheers, > fijal > > >> >> On Wed, Apr 11, 2012 at 11:22, Amaury Forgeot d'Arc wrote: >> >>> Hi, >>> >>> 2012/4/11 Max Lavrenov >>> >>>> Hello everyone! >>>> >>>> I got some errors while i was building the embedded-pypy branch with >>>> python translate.py -Ojit --shared. >>>> Could anybody help me with it, please? >>>> >>> >>> trackgcroot.py does not recognize some constructs used when compiling >>> with -fPIC. >>> I thought I fixed them though... >>> Anyway it will crash at runtime: because of code relocation, function >>> pointers are actually >>> addresses into a translation table, which contains the real code address. >>> I already fixed this for Windows a long time ago. >>> >>> Meanwhile, the best thing to do is to avoid assembler magic, and >>> translate with the option: >>> --gcrootfinder=shadowstack >>> >>> -- >>> Amaury Forgeot d'Arc >>> >> >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bokr at oz.net Fri Apr 13 02:19:58 2012 From: bokr at oz.net (Bengt Richter) Date: Thu, 12 Apr 2012 17:19:58 -0700 Subject: [pypy-dev] STM/AME for CPython? In-Reply-To: References: Message-ID: <4F87712E.8040907@oz.net> Hi Armin, On 04/11/2012 04:28 AM Armin Rigo wrote: > If anyone is interested, I could create a different mailing list in > order to discuss this in more details, as it's only related to PyPy on > the original idea level. 
From experience I would think that this has > the potential to become a Psyco-like experiment, but unlike 10 years > ago, today I'm not ready any more to dive completely alone into a > project of that scale :-) I have too many irons (for me ;-) in the fire to participate much beyond occasional delurking, but I thought maybe a mailing list for your STM work could possibly serve also as preemptive defense against IP trolls, by establishing prior art and publication of ideas from yourself and others thinking about STM. Could you say something preemptive re STM for concurrent update of shared direct rendering video memory? What issues re separation of metadata vs "value" representation? Or re heterogenous processor mixes operating on shared data? Or re security/permissions/capabilities? Ditto for scaling the STM concept for distributed update of blobs in the cloud (with presumable implications for the smart phone environment, maybe crowd-sourced updates to crowd-viewed cloud representation of something, whether emergency info or flashmob partying status, etc.). Regards, Bengt Richter From stefan_ml at behnel.de Fri Apr 13 07:49:40 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 13 Apr 2012 07:49:40 +0200 Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables Message-ID: Hi, I already mentioned this possibility a couple of times on this list, but now the idea has attracted some more general interest. The Cython project would like to specify a general way of unpacking wrapped native functions for Python implementations. This is interesting for CPython because it would allow NumPy and other extensions to unpack functions implemented by other extensions before calling them in loops. PyPy could let its JIT do the same at runtime. 
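[Editor's note] To make the "unpacking" idea concrete, here is a small ctypes analogy, chosen by the editor and not the CEP's proposed API: `CFUNCTYPE` declares a native signature once and manufactures a real C function pointer, and every call through it crosses exactly the argument packing/unpacking layer that the proposal would let a JIT or Cython skip when the signature is known in advance.

```python
import ctypes

# Editor's analogy, not the CEP API: declare a native signature once.
SIGNATURE = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)


def py_square(x):
    return x * x


# Manufacture a genuine C function pointer from the Python callable.
c_square = SIGNATURE(py_square)

# Each call through the pointer boxes and unboxes c_double arguments --
# the per-call overhead an implementation could elide given the signature.
print(c_square(3.0))  # -> 9.0
```

The CEP essentially standardizes how a caller discovers such a signature on a wrapped callable, so that the declaration step can happen once instead of on every call.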
Cython and low-level wrapper generators would be the obvious way to implement this, but it would also provide a generic way for Cython to unpack a function pointer from wrapped functions exported by other modules, at least if the native signature is known (or likely to be known) at compile time. Dag Seljebotn has started writing a CEP (Cython Enhancement Proposal) about it and opened the discussion on the Cython core developers mailing list. http://wiki.cython.org/enhancements/cep1000 http://thread.gmane.org/gmane.comp.python.cython.devel/13416/focus=13417 http://mail.python.org/mailman/listinfo/cython-devel Additionally, there is PEP 362 which aims to provide a Signature object for Python functions. I think we should build native signatures on top of that. http://www.python.org/dev/peps/pep-0362/ Note that a wrapper function may offer more than one native signature, e.g. when wrapping overloaded C++ functions or when using Cython fused functions. We may eventually end up with a PEP instead of a CEP for this, but I think it's still some way before we get there. Please participate in the discussion if you are interested. Stefan From roberto at unbit.it Fri Apr 13 08:15:37 2012 From: roberto at unbit.it (Roberto De Ioris) Date: Fri, 13 Apr 2012 08:15:37 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> <2b7fbb99ef9d417aa8098420959a3ceb.squirrel@manage.unbit.it> Message-ID: <6d3762ff2aca989af19aab5acff17101.squirrel@manage.unbit.it> > I am afraid it's not interesting. > I just wanted to test it with our small project - geoip server which > resolves ip address to region. Now we use bjoern wsgi server for running > this project. 
Basically it does nothing but > array.array. > When I run this code without wsgi wrapper using pypy it shows about 400% > speed boost. > Unfortunately uwsgi with pypy plugin shows me pretty much the same performance as > usual uwsgi. When I wrote a simple application which loops over a "for" cycle > and increments some variable, I got the same result. > I'll try running more tests to figure out what I'm doing wrong. I am a > fan > of your work and looking forward to using pypy in our production project. > > Hi, when you compile libpypy, the translator should generate a pypy-c binary. Try to run your code without the wsgi wrapper with this new binary, to check if you still have a 400 times improvement. If you get 'slower' values over a standard binary release of pypy, it means you have compiled it wrongly (maybe without the jit). In my tests, 99% of the webapps (being IO-based) do not gain too much power on pypy, but if you have a CPU-bound webapp (like you), you will end screaming WOOOW all of the time :) And take in account that in uWSGI we suggest using lua-jit for all of the need-to-be-fast parts, so to make us scream you have to be faster than it :) I would like to suggest you another approach, vastly pushed in the past years in pypy talks. It includes a bit of overhead, but you will end with a more solid platform: delegate the CPU-bound part to a pypy daemon (listening on unix sockets or whatever you want) and run your public webapp with your server of choice running on cpython. In your webapp you simply 'enqueue' tasks to the pypy daemon and wait for the result. Do not make the error to use a complex protocol for it. Go line-based. No need to add more overhead i often see.
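[Editor's note] The line-based protocol suggested above can be very small indeed. The sketch below is hedged: the task function and the in-process socketpair are invented for illustration, and a real setup would have the pypy daemon listening on a unix socket with the cpython webapp connecting as a client. It shows one request and one reply, each a single line:

```python
import socket


def handle(line):
    # stand-in for the CPU-bound work the pypy daemon would perform
    return str(sum(int(tok) for tok in line.split(",")))


# In-process demo over a socketpair; in production the daemon side
# binds a unix socket and the webapp side connects to it.
daemon_side, webapp_side = socket.socketpair()

webapp_side.sendall(b"1,2,3\n")                          # enqueue a task
request = daemon_side.makefile("r").readline().strip()   # daemon reads one line
daemon_side.sendall((handle(request) + "\n").encode())   # one-line reply
reply = webapp_side.makefile("r").readline().strip()
print(reply)  # -> 6
```

Because each message is one newline-terminated line, the webapp side needs nothing beyond `readline()`, which is the "no extra overhead" point being made.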
If for some reason the pypy daemon crashes (come on it happens ;), you can simply automatically restart it without propagating the down to your public web-services (if it crashes during a request you can simply re-enqueue it in the same transaction) With this approach you can continue using bjoern for the public IO-bound part, and abuse pypy for the cpu-heavy one. In uWSGI this kind of approach is a lot easier (from a sysadmin point-of-view) as you can use the --attach-daemon option, allowing you to 'attach' an external process (your pypy daemon) that will be monitored (and respawned) automatically. -- Roberto De Ioris http://unbit.it From fijall at gmail.com Fri Apr 13 10:23:10 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 13 Apr 2012 10:23:10 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: <6d3762ff2aca989af19aab5acff17101.squirrel@manage.unbit.it> References: <0255399bee51bd1c3df0a091a91f8cf3.squirrel@manage.unbit.it> <93c30f14e987d21708add7a61fad0ca4.squirrel@manage.unbit.it> <3a1eb74b7391d714aa93b8b4c5f6f7cf.squirrel@manage.unbit.it> <29bcf1e65852114717cc283402b654a6.squirrel@manage.unbit.it> <0a196a56fe09d861aa936049b865d908.squirrel@manage.unbit.it> <2b7fbb99ef9d417aa8098420959a3ceb.squirrel@manage.unbit.it> <6d3762ff2aca989af19aab5acff17101.squirrel@manage.unbit.it> Message-ID: On Fri, Apr 13, 2012 at 8:15 AM, Roberto De Ioris wrote: > > > I am afraid it's not interesting. > > I just wanted to test it with our small project - geoip server which > > resolves ip address to region. Now we use bjoern wsgi server for > running > > this project. Basically it's does nothing but bisect search on > > array.array. > > When i run this code without wsgi wrapper using pypy it shows about 400% > > speed boots. > > Unfortunately uwsgi with pypy plugin shows me pretty same perfomance as > > usual uwsgi. When i wrote simple application which loop over "for" cycle > > and increment some variable, i got same result. 
> > I'll try running more tests to figure out what I'm doing wrong. I am a > > fan > > of your work and looking forward to using pypy in our production project. > > > > > > Hi, > > when you compile libpypy, the translator should generate a pypy-c binary. > > Try to run your code without the wsgi wrapper with this new binary, to > check if you still have a 400 times improvement. > I think he said 400%, that's not as good ;-) > > If you get 'slower' values over a standard binary release of pypy, it > means you have compiled it wrongly (maybe without the jit). > > In my tests, 99% of the webapps (being IO-based) do not gain too much > power on pypy, but if you have a CPU-bound webapp (like you), you will end > screaming WOOOW all of the time :) > working on it... > > And take in account that in uWSGI we suggest using lua-jit for all of the > need-to-be-fast parts, so to make us scream you have to be faster than it > :) > FYI we outperform luajit on richards (giving *ample* warmup, luajit warms up *so* fast), did not measure on anything else. Just sayin > > I would like to suggest you another approach, vastly pushed in the past > years in pypy talks. It includes a bit of overhead, but you will end with > a more solid platform: > > delegate the CPU-bound part to a pypy daemon (listening on unix sockets or > whatever you want) and run your public webapp with your server of choice > running on cpython. In your webapp you simply 'enqueue' tasks to the pypy > daemon and wait for the result. Do not make the error to use a complex > protocol for it. Go line-based. No need to add more overhead i often see. 
> > If for some reason the pypy daemon crashes (come on it happens ;), you can > simply automatically restart it without propagating the down to your > public web-services (if it crashes during a request you can simply > re-enqueue it in the same transaction) > I think crashes are less of an issue (pypy is relatively stable) compared to the libraries that you have to interface with (like lxml). > > With this approach you can continue using bjoern for the public IO-bound > part, and abuse pypy for the cpu-heavy one. > > In uWSGI this kind of approach is a lot easier (from a sysadmin > point-of-view) as you can use the --attach-daemon option, allowing you to > 'attach' an external process (your pypy daemon) that will be monitored > (and respawned) automatically. > > > -- > Roberto De Ioris > http://unbit.it > Thanks for insights Roberto! I'm really glad uWSGI community is interested *and* seem to be taking a reasonable, non-overhyped approach Cheers, fijal -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Fri Apr 13 15:11:51 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 13 Apr 2012 15:11:51 +0200 Subject: [pypy-dev] STM/AME for CPython? In-Reply-To: <4F87712E.8040907@oz.net> References: <4F87712E.8040907@oz.net> Message-ID: Hi Bengt, Actually I think we need foremost a new name. This is not about doing great things in any of the domains you list. It has no obvious connexion to any separation of metadata, security, capability, distributed update, mobile, or any other specific environment. This is merely about adding a new one-liner API usable by programs in order to behave as if they were run serially, but actually run on multiple cores. So even using the name "STM" is rather wrong and confusing: STM is only one possible implementation. This is described explicitly by the OCM project: http://ocm.dreamhosters.com/ . 
The goal I am pursuing is slightly different than OCM so using that name is not right either, but at least it is at a similar level. On a different note, I found an implementation for C/C++ programs using all the tricks of paging under Linux: http://plasma.cs.umass.edu/emery/grace . Does not work at all on CPython, where reference counting is enough to kill it: almost no page will be read-only. I'm writing down these papers in a file in the duhton repository. It would be cool if someone reviews them and comes up with a great general name :-) A bientôt, Armin. From wlavrijsen at lbl.gov Fri Apr 13 19:03:57 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Fri, 13 Apr 2012 10:03:57 -0700 (PDT) Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables In-Reply-To: References: Message-ID: Hi Stefan, > Note that a wrapper function may offer more than one native signature, e.g. > when wrapping overloaded C++ functions or when using Cython fused functions. the big problem I'm finding for unpacking C++ functions (other than that there's no platform-independent way to do so when it comes to methods, AFAIK anyway), is handling of C++ exceptions. Integrating unpacked function pointers with PyPy works rather elegantly through the functionality made available in rlib/libffi.py. The form of the specification matters little at that point, as long as it is complete. For cppyy, the current plan is to wrap python functions in generated C++ functions for callbacks. But, reading the proposal, that's only half the story. I'd love to see the other side materialize as well.
Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From stefan_ml at behnel.de Fri Apr 13 20:18:11 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 13 Apr 2012 20:18:11 +0200 Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables In-Reply-To: References: Message-ID: wlavrijsen at lbl.gov, 13.04.2012 19:03: >> Note that a wrapper function may offer more than one native signature, e.g. >> when wrapping overloaded C++ functions or when using Cython fused functions. > > the big problem I'm finding for unpacking C++ functions (other than that > there's no platform-independent way to do so when it comes to methods, AFAIK > anyway), is handling of C++ exceptions. At least for Cython code that's not a problem - its functions raise Python exceptions. Some C++ exceptions are mapped automatically and others can be mapped explicitly. Other wrapper generators could also generate an intermediate wrapper function that does the error mapping and passes on Python exceptions. Unpacking a wrapped function doesn't necessarily mean that you get a bare C or C++ function. The main intention is to reduce the call overhead, which is mainly introduced by packing and unpacking arguments. That's why we want to expose the signature. > Integrating unpacked functions pointers with PyPy works rather elegantly > through the functionality made available in rlib/libffi.py. The form of the > specification matters little at that point, as long as it is complete. Hmm, but that's RPython, isn't it? I thought that was compiled statically? How would it adapt to a signature that it finds at runtime then? I think this is closer to ctypes, except that you don't have to specify the signature of the thing you call because PyPy will see it at call time. > For cppyy, the current plan is to wrap python functions in generated C++ > functions for callbacks. Yes, that would be the other direction. 
Stefan From alex.gaynor at gmail.com Fri Apr 13 20:25:29 2012 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Fri, 13 Apr 2012 14:25:29 -0400 Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables In-Reply-To: References: Message-ID: On Fri, Apr 13, 2012 at 2:18 PM, Stefan Behnel wrote: > wlavrijsen at lbl.gov, 13.04.2012 19:03: > >> Note that a wrapper function may offer more than one native signature, > e.g. > >> when wrapping overloaded C++ functions or when using Cython fused > functions. > > > > the big problem I'm finding for unpacking C++ functions (other than that > > there's no platform-independent way to do so when it comes to methods, > AFAIK > > anyway), is handling of C++ exceptions. > > At least for Cython code that's not a problem - its functions raise Python > exceptions. Some C++ exceptions are mapped automatically and others can be > mapped explicitly. > > Other wrapper generators could also generate an intermediate wrapper > function that does the error mapping and passes on Python exceptions. > Unpacking a wrapped function doesn't necessarily mean that you get a bare C > or C++ function. The main intention is to reduce the call overhead, which > is mainly introduced by packing and unpacking arguments. That's why we want > to expose the signature. > > > > Integrating unpacked functions pointers with PyPy works rather elegantly > > through the functionality made available in rlib/libffi.py. The form of > the > > specification matters little at that point, as long as it is complete. > > Hmm, but that's RPython, isn't it? I thought that was compiled statically? > How would it adapt to a signature that it finds at runtime then? > > I think this is closer to ctypes, except that you don't have to specify the > signature of the thing you call because PyPy will see it at call time. > > > > For cppyy, the current plan is to wrap python functions in generated C++ > > functions for callbacks. > > Yes, that would be the other direction. 
> > Stefan > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > rlib/libffi.py is for runtime stuff, it's the basis of both ctypes and the C++ wrapper. You may be thinking of rffi.py, which is compile time. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Fri Apr 13 20:57:32 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 13 Apr 2012 20:57:32 +0200 Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables In-Reply-To: References: Message-ID: Alex Gaynor, 13.04.2012 20:25: > On Fri, Apr 13, 2012 at 2:18 PM, Stefan Behnel wrote: >> wlavrijsen at lbl.gov, 13.04.2012 19:03: >>> Integrating unpacked functions pointers with PyPy works rather elegantly >>> through the functionality made available in rlib/libffi.py. The form of >>> the specification matters little at that point, as long as it is >>> complete. >> >> Hmm, but that's RPython, isn't it? I thought that was compiled statically? >> How would it adapt to a signature that it finds at runtime then? >> >> I think this is closer to ctypes, except that you don't have to specify the >> signature of the thing you call because PyPy will see it at call time. > > rlib/libffi.py is for runtime stuff, it's the basis of both ctypes and the > C++ wrapper. You may be thinking of rffi.py, which is compile time. Ah, cool. Good to know. Then it shouldn't be much work for PyPy to support this. 
Stefan From wlavrijsen at lbl.gov Fri Apr 13 21:15:20 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Fri, 13 Apr 2012 12:15:20 -0700 (PDT) Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables In-Reply-To: References: Message-ID: Hi Stefan, > At least for Cython code that's not a problem - its functions raise Python > exceptions. Some C++ exceptions are mapped automatically and others can be > mapped explicitly. > > Other wrapper generators could also generate an intermediate wrapper > function that does the error mapping and passes on Python exceptions. ah, that's what I referred to as "a problem" as it still leaves a layer, and hence a slowdown. :) Yes, in the "slow" path it works like that. I'm just hoping for something more elegant. > Hmm, but that's RPython, isn't it? I thought that was compiled statically? > How would it adapt to a signature that it finds at runtime then? It is RPython, but compiled statically does not mean that it cannot have behaviors at runtime: you specify at runtime the kind of low-level objects to expect, then map those objects at the time of the call. Like you say, just as with ctypes (which in PyPy has libffi underneath). The relevant classes are libffi.Func, which receives the annotations from a selection of libffi.types, and libffi.ArgChain, which receives the values just before the call. The return type is handed to Func.call, and then only needs boxing to be sent back to Python. In C++, there is a little bit of gymnastics going on for member functions, as a naked function pointer can only be obtained after binding it to an object, which again is slow (relatively, anyway). However, if the type is known, a single lookup suffices, and then the JIT can guard on that type. The 'this' pointer then becomes the first arg in the libffi.ArgChain and all is good. For gcc anyway, which has this behavior documented as an extension, so it can presumably be relied upon.
:) >> For cppyy, the current plan is to wrap python functions in generated C++ >> functions for callbacks. > > Yes, that would be the other direction. I'm more seeing it as only half the work, rather than a different direction. At least, as I understand Cython, the generated low level code is an actual identifiable function? That is not the case for JIT traces, and so where in Cython a function pointer can be given back to the C++ code performing the callback, I'm not aware of anything similar being available from the JIT. Of course, Python -> Cython -> C-pointer -> JIT should work nicely. Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From stefan_ml at behnel.de Fri Apr 13 21:43:17 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 13 Apr 2012 21:43:17 +0200 Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables In-Reply-To: References: Message-ID: wlavrijsen at lbl.gov, 13.04.2012 21:15: >> At least for Cython code that's not a problem - its functions raise Python >> exceptions. Some C++ exceptions are mapped automatically and others can be >> mapped explicitly. >> >> Other wrapper generators could also generate an intermediate wrapper >> function that does the error mapping and passes on Python exceptions. > > ah, that's what I referred to as "a problem" as it still leaves a layer, > and hence a slowdown. :) Yes, in the "slow" path it works like that. I'm > just hoping for something more elegant. It's not necessarily slow because a) the intermediate function can do more than just passing through data (especially in the case of Cython or Numba) and b) the exception case is usually just that, an exceptional case. >> Hmm, but that's RPython, isn't it? I thought that was compiled statically? >> How would it adapt to a signature that it finds at runtime then? 
> > It is RPython, but compiled statically does not mean that it can not have > behaviors at runtime: you specify at runtime the kind of low-level objects > to expect, then map those objects at the time of the call. Like you say, > just as with ctypes (which in PyPy has libffi underneath). Ok, I just took a look at it and it seems like the right thing to use for this. Then all that's left is an efficient runtime mapping from the exported signature to a libffi call specification. > The relevant classes are libffi.Func which receives the annotations from > a selection of libffi.types, and libffi.ArgChain which receives the values > just before the call. The return type is handed to Func.call, and then > only needs boxing to be send back to python. > > In C++, there's for member functions a little bit of gymnastics going on as > a naked function pointer can only be obtained after binding it to an object > which again is slow (relatively, anyway). However, if the type is known, a > single lookup suffices, and then the JIT can guard on that type. The 'this' > pointer then becomes the first arg in the libffi.ArgChain and all is good. > For gcc anyway, which has this behavior documented as an extension, so it > can presumably be relied upon. :) Ok, then the advantage is that you don't have to know the exact signature in the calling code because it is documented in the called function. For example, you could pass arbitrary context arguments through to the function and the JIT would deal with the actual type mapping automatically. >>> For cppyy, the current plan is to wrap python functions in generated C++ >>> functions for callbacks. >> >> Yes, that would be the other direction. > > I'm more seeing it as only half the work, rather than a different direction. > At least, as I understand Cython, the generated low level code is an actual > identifiable function? It's a direct mapping from the functions in your source code. 
> That is not the case for JIT traces, and so where in > Cython a function pointer can be given back to the C++ code performing the > callback, I'm not aware of anything similar being available from the JIT. Hmm, yes, this is pretty trivial in Cython. You declare your function with the right C signature and that's it. Stefan From wlavrijsen at lbl.gov Fri Apr 13 22:19:14 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Fri, 13 Apr 2012 13:19:14 -0700 (PDT) Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables In-Reply-To: References: Message-ID: Hi Stefan, > It's not necessarily slow because a) the intermediate function can do more > than just passing through data (especially in the case of Cython or Numba) > and b) the exception case is usually just that, an exceptional case. interesting: under a), what other useful work can be done by the intermediate function? (Yes for b), but the slowness is in having an extra layered C++ call in between, the one that contains the try/catch. That's at least an extra 25% overhead over the naked function pointer at current levels. Of course, only in a micro benchmark. In real life, it's irrelevant.) > Ok, I just took a look at it and it seems like the right thing to use for > this. Then all that's left is an efficient runtime mapping from the > exported signature to a libffi call specification. It need not even be an efficient mapping: since the mapping is static for each function pointer, the JIT takes care of removing it (that is, it puts the results of the mapping inline, so the lookup code itself disappears). Same goes for C++ overloads (with a little care): each overload that fails should result in a (python) exception during mapping of the arguments. The JIT then removes those branches from the trace, leaving only the call that succeeded in the optimized trace. Thus, any time spent making the selection of the overload efficient is mostly wasted, as that code gets completely removed. 
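The overload-selection scheme described above can be sketched in plain Python. This is a hedged illustration with hypothetical helper names (`expect`, `make_overloaded`), not cppyy's actual code — the real mapping happens at the libffi level — but it shows the key property: a failed argument mapping raises, and the next overload is tried, leaving a trace in which only the succeeding branch survives:

```python
def expect(tp):
    """Strict converter: raise TypeError if the value is not of type tp."""
    def conv(x):
        if not isinstance(x, tp):
            raise TypeError("expected %s, got %r" % (tp.__name__, x))
        return x
    return conv

def make_overloaded(*overloads):
    """overloads: (list_of_converters, implementation) pairs, tried in order."""
    def call(*args):
        for converters, impl in overloads:
            if len(converters) != len(args):
                continue
            try:
                mapped = [conv(a) for conv, a in zip(converters, args)]
            except TypeError:
                continue  # argument mapping failed: try the next overload
            return impl(*mapped)
        raise TypeError("no matching overload for %r" % (args,))
    return call

# Hypothetical C++-style overloads f(int) and f(double):
f = make_overloaded(
    ([expect(int)],   lambda x: ("int", x)),
    ([expect(float)], lambda x: ("double", x)),
)
print(f(3), f(2.5))  # ('int', 3) ('double', 2.5)
```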
Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From bokr at oz.net Sat Apr 14 02:03:38 2012 From: bokr at oz.net (Bengt Richter) Date: Fri, 13 Apr 2012 17:03:38 -0700 Subject: [pypy-dev] STM/AME for CPython? In-Reply-To: References: <4F87712E.8040907@oz.net> Message-ID: <4F88BEDA.5010307@oz.net> On 04/13/2012 06:11 AM Armin Rigo wrote: > Hi Bengt, > > Actually I think we need foremost a new name. This is not about doing > great things in any of the domains you list. It has no obvious > connexion to any separation of metadata, security, capability, > distributed update, mobile, or any other specific environment. This is > merely about adding a new one-liner API usable by programs in order to > behave as if they were run serially, but actually run on multiple > cores. So even using the name "STM" is rather wrong and confusing: > STM is only one possible implementation. > > This is described explicitly by the OCM project: > http://ocm.dreamhosters.com/ . The goal I am pursuing is slightly > different than OCM so using that name is not right either, but at > least it is at a similar level. > > On a different note, I found an implemention for C/C++ programs using > all the tricks of paging under Linux: > http://plasma.cs.umass.edu/emery/grace . Does not work at all on > CPython, where reference counting is enough to kill it: almost no page > will be read-only. A thought experiment: what if you squandered pages by allocating a buddy-page for each page with refcount variables, and did your reference counting at the same offset in the buddy page, leaving the original untouched as far as the ref counting goes? And then don't write protect the buddy/refcount page, but do for the original, so you can use the page fault mechanism to detect data writes other than reference count updates. ... (sort of what I was getting at referring to issues of separation of data and metadata). 
Of course you could do better space-wise by putting a level of indirection in the ref counting, and point to refcount values allocated packed in the buddy/refcount pages (which wouldn't be "buddy" 1:1 any more). Similar considerations might apply to other metadata. Haven't thought about how garbage collection would be affected, but I gotta stop now ;-) > > I'm writing down these papers in a file in the duhton repository. It > would be cool if someone reviews them and comes up with a great > general name :-) > Much of the discussion reminds me of CPU pipelines with out-of-order and speculative execution and discarding of results from paths not taken when calculated conditions become available. So maybe a name like Speculative Parallel Software Execution Merging (SPSXM?) (please improve ;-) But if I understand you right, that's an implementation detail underneath your "one-liner API" anyway, and not your primary concern? (until you get to optimizing, IWG ;-) But do you really need a new one-liner for python? I.e., couldn't you use a decorator to transmogrify a function into just about anything, like to create atomic functions, optionally also asynchronous? (wouldn't surprise me if this exists already). But given duhton[1], it is maybe not python per se but rpython that you want to make a one-liner for? I obviously haven't read enough, sorry ;-/ (OTOH I may represent a few others in that). BTW, will your solution enable us to create atomic properties using descriptors and all that? Regards, Bengt Richter PS. Nit: I wasn't familiar with the acronym AME (Automatic Mutual Exclusion). Might be good to spell it out on first use in your docs, as it didn't come up in wikipedia ;-) [1] BTW, why not -'thon'?
;-) From mailing.ch at congrex.com Sat Apr 14 05:48:09 2012 From: mailing.ch at congrex.com (ENS Newsletter) Date: Sat, 14 Apr 2012 05:48:09 +0200 Subject: [pypy-dev] Communications of the European Neurological Society - April 2012 Message-ID: <082dbda3-7c1d-409e-bb8e-d2a22c044d89@CEN-SV-EXM-02.congrex.internal> An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sat Apr 14 09:36:00 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 14 Apr 2012 09:36:00 +0200 Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables In-Reply-To: References: Message-ID: wlavrijsen at lbl.gov, 13.04.2012 22:19: >> It's not necessarily slow because a) the intermediate function can do more >> than just passing through data (especially in the case of Cython or Numba) >> and b) the exception case is usually just that, an exceptional case. > > interesting: under a), what other useful work can be done by the intermediate > function? Cython is a programming language, so you can stick anything you like into the wrapper. Note that a lot of code is not being (re-)written specifically for a platform (CPython/PyPy/...), or at least shouldn't be, so when writing a wrapper as a library, you may want to put some (and sometimes a lot of) functionality into the wrapper itself. Be it to make a C-ish interface more comfortable or to provide a certain functionality on top of a bare C/C++ library. Also, Cython allows you to parallelise code quite easily based on OpenMP, another thing that is often done in wrappers for computational code. This discussion actually arose from the intention to interface Cython code efficiently with Numba, which uses the LLVM to generate code at runtime. For that, both sides need to be able to see the C level signatures of what they call in order to bypass the Python level call overhead. > (Yes for b), but the slowness is in having an extra layered C++ call in > between, the one that contains the try/catch. 
That's at least an extra 25% > overhead over the naked function pointer at current levels. Of course, only > in a micro benchmark. In real life, it's irrelevant.) IIRC, exceptions can be surprisingly expensive in C++, so I agree that it matters for very small functions. But you'd want to inline those anyway and avoid exceptions if at all possible. >> Ok, I just took a look at it and it seems like the right thing to use for >> this. Then all that's left is an efficient runtime mapping from the >> exported signature to a libffi call specification. > > It need not even be an efficient mapping: since the mapping is static for > each function pointer, the JIT takes care of removing it (that is, it puts > the results of the mapping inline, so the lookup code itself disappears). We're currently discussing ways to do this in Cython as well. The code wouldn't get removed but at least moved out of the way, so that the CPU's branch prediction can do the right thing. That gives you about the same performance in practice. > Same goes for C++ overloads (with a little care): each overload that fails > should result in a (python) exception during mapping of the arguments. The > JIT then removes those branches from the trace, leaving only the call that > succeeded in the optimized trace. Thus, any time spent making the selection > of the overload efficient is mostly wasted, as that code gets completely > removed. A static compiler would handle that similarly. Stefan From stefan_ml at behnel.de Sat Apr 14 09:47:54 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 14 Apr 2012 09:47:54 +0200 Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables In-Reply-To: References: Message-ID: Stefan Behnel, 14.04.2012 09:36: > wlavrijsen at lbl.gov, 13.04.2012 22:19: >> Same goes for C++ overloads (with a little care): each overload that fails >> should result in a (python) exception during mapping of the arguments. 
The >> JIT then removes those branches from the trace, leaving only the call that >> succeeded in the optimized trace. Thus, any time spent making the selection >> of the overload efficient is mostly wasted, as that code gets completely >> removed. > > A static compiler would handle that similarly. Ah, sorry, misread your paragraph. You were talking about alternative signatures. Yes, those require at least some setup overhead in static code. Should be possible to avoid that overhead in loops, though. We'll see how that works out. Stefan From stefan_ml at behnel.de Sat Apr 14 18:44:53 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 14 Apr 2012 18:44:53 +0200 Subject: [pypy-dev] [cpyext] partial fake PEP393 implementation to provide access to single unicode characters in strings Message-ID: Hi, PEP393 (the new Unicode type in Py3.3) defines a rather useful C interface towards the characters of a Unicode string. I think it would be cool if cpyext provided that, so that access to single characters won't require copying the unicode buffer into C space anymore. I attached an untested (and likely non-working) patch that adds the most important parts of it. The implementation does not care about non-BMP characters, which (if I'm not mistaken) are encoded as surrogate pairs in PyPy. Apart from that, the functions behave like their CPython counterparts, which means that the implementation shouldn't get in the way of a future real PEP393 implementation. What do you think? I have no idea if the way the index access is done in PyUnicode_READ_CHAR() is in any way efficient - would be good if it was. Specifically, the intention is to avoid creating a 1-character unicode string copy before taking its ord(). Does this happen automatically, or is there a way to make sure it does that? Stefan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: fake_pep393.patch Type: text/x-patch Size: 3138 bytes Desc: not available URL: From wlavrijsen at lbl.gov Sun Apr 15 07:50:57 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Sat, 14 Apr 2012 22:50:57 -0700 (PDT) Subject: [pypy-dev] Cython-CEP: Native dispatch through Python callables In-Reply-To: References: Message-ID: Hi Stefan, > Cython is a programming language, so you can stick anything you like into > the wrapper. Note that a lot of code is not being (re-)written specifically > for a platform (CPython/PyPy/...), or at least shouldn't be, so when > writing a wrapper as a library, you may want to put some (and sometimes a > lot of) functionality into the wrapper itself. Be it to make a C-ish > interface more comfortable or to provide a certain functionality on top of > a bare C/C++ library. Also, Cython allows you to parallelise code quite > easily based on OpenMP, another thing that is often done in wrappers for > computational code. ah, so even the whole loop could be in the wrapper and thus generated. (In my case, the C++ code is always "as-is" since the C++ API to be bound is by definition pre-existing, and often not under control of the Python developer who wants to use it.) That brings me to something else: one thing that I'd love to know from the signature object, but this may not be relevant to the Cython use case since both ends are controlled, is what the ownership rules are for each of the arguments and the return type. For C, this may not be too relevant, other than for const char* returns, but for C++ it's rather important: unless an API follows a strict convention (e.g. all non-const pointers passed in get owned, everything const does not get modified, consistent naming, etc.), it's hard to do anything fully automatic, so patch-ups are frequently needed. > IIRC, exceptions can be surprisingly expensive in C++, so I agree that it > matters for very small functions. 
But you'd want to inline those anyway and > avoid exceptions if at all possible. Right, but any inlining in my case is done by the PyPy JIT, and it is blind the moment C++ gets entered. So, I'm a bit out of luck there: it'll take a lot more engineering to combine several consecutive C++ calls into one set, which as a whole is then put in a try/catch. However, if, together with the python binding from cppyy, such a signature were made available through a property (a triviality, given that all reflection information is available), then any bound C++ function from cppyy (or PyCling on CPython for that matter) could be made part of the Cython-generated code and be inlined after all. Sounds pretty cool to me. > Ah, sorry, misread your paragraph. You were talking about alternative > signatures. Yes, those require at least some setup overhead in static code. > Should be possible to avoid that overhead in loops, though. We'll see how > that works out. It depends on at which point the code kicks in, but if the inputs are still boxed types, then in my experience the fastest approach by far is to hash their types and memoize the overload selection made on that hash.
Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From andrewfr_ice at yahoo.com Sun Apr 15 18:07:02 2012 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Sun, 15 Apr 2012 09:07:02 -0700 (PDT) Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> <1333841132.81890.YahooMailNeo@web120704.mail.ne1.yahoo.com> Message-ID: <1334506022.63382.YahooMailNeo@web120706.mail.ne1.yahoo.com> Hi Armin: ________________________________ From: Armin Rigo To: Andrew Francis Cc: PyPy Developer Mailing List ; "stackless at stackless.com" Sent: Wednesday, April 11, 2012 8:29 AM Subject: Re: The Work Plan Re: [pypy-dev] STM proposal funding On Sun, Apr 8, 2012 at 01:25, Andrew Francis wrote: AF> Question: without specific transaction_start() and transaction_commit() AF> calls, how does rstm know what the start and finish of transactions are? >Please take a different point of view: if the proper adaptation is >done, the users of the "stackless" module don't need any change. Yes. Suggestion: perhaps in a future position paper, you can state as a design principle that the programmer does not explicitly state a commit point? So returning to the example, a programme more in the spirit of what you are doing is:

def transferTasklet(ch):
    def transfer(toAccount, fromAccount, amount):
        fromAccount.withdraw(amount)
        toAccount.deposit(amount)
    while someFlag:
        toAccount, fromAccount, amount = ch.receive()
        transaction.add(transfer, toAccount, fromAccount, amount)

if __name__ == "__main__":
    ch = stackless.channel()
    task = stackless.tasklet(transferTasklet)(ch)
    transaction.add(task)
    ...
    # let us assume that the stackless scheduler and transaction
    # manager are somehow integrated
    stackless.run()
and we assume that the underlying system "magically" takes care of stuff. However the programmer does have to throw the transaction manager a bone (hence transaction.add()) >They will continue to work as they are, with the same semantic as today --- >with the exception of the ordering among tasklets, which will become >truly random. Noted. I think issues of ordering (serialisation) are a consequence of a correctly implemented transaction manager. If I understand your strategy, the approach is to give a Python developer a race-free programme with minimum effort. However I think a major concern would be implementations that minimise conflicts/contention. As I stated in previous posts, I believe in the case of Stackless, the message passing system is a natural way to give the application programmer more control in regard to minimising conflicts/contention (or whatever term the literature uses). >This can be achieved by hacking at a different level. I am interested in what this hacking looks like under the hood. Again, I am assuming as a design principle guideline (and to focus folk's efforts), one should be providing the rstm module just enough information to work but make no assumptions about how it works. So one ought not depend on whether strategies like eager/lazy evaluation (right now, I think your system depends on lazy evaluation since I see stuff like redo logs) are used. Still I am interested in stuff like: would rstm function correctly if the underlying implementation tracked tasklets rather than threads? I am looking at your implementation and the AME.c/h file in Duhton. I want to get into a position to do some hacking. So I want to think about: What do we know about the relationship between transactions, tasklets and threads? Under what conditions in a Stackless application would conflicts occur? What could be done by 1) the programmer 2) the STM implementation to avoid this?
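For experimenting with these questions without the stm-gc branch, the 'transaction' interface (lib_pypy/transaction.py) can be emulated with purely sequential semantics. This is a hedged sketch of the API's contract only — transaction.add() schedules a call, transaction.run() drains the queue, and a running transaction may add() more — not the STM-backed implementation, which runs the pending transactions on multiple cores:

```python
_pending = []

def add(f, *args):
    """Schedule f(*args) to run as one transaction."""
    _pending.append((f, args))

def run():
    """Run all pending transactions; each one may add() more."""
    while _pending:
        f, args = _pending.pop(0)
        f(*args)

# The bank-transfer example from this thread, run through the emulation:
balances = {"a": 100, "b": 0}

def transfer(src, dst, amount):
    balances[src] -= amount
    balances[dst] += amount

for _ in range(3):
    add(transfer, "a", "b", 10)
run()
print(balances)  # {'a': 70, 'b': 30}
```

Because each call to transfer here is independent and order-insensitive, it matches the "reasonable chances to be independent" criterion Armin gives for when a plain loop can be rewritten with transaction.add().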
Some side notes: I was reading about the Haskell implementation in the "Software Transactional Memory 2nd" book (chapter 4.6.2 - conditional synchronization). It seems that Haskell has a user space "thread" library that is somewhat integrated with the STM. I am going to look at this more since this is something that Stackless could take advantage of. Salut, Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrewfr_ice at yahoo.com Mon Apr 16 19:43:18 2012 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Mon, 16 Apr 2012 10:43:18 -0700 (PDT) Subject: [pypy-dev] STM/AME for CPython? In-Reply-To: References: <4F87712E.8040907@oz.net> Message-ID: <1334598198.432.YahooMailNeo@web120705.mail.ne1.yahoo.com> Hi Armin: ________________________________ From: Armin Rigo To: Bengt Richter Cc: pypy-dev at python.org Sent: Friday, April 13, 2012 9:11 AM Subject: Re: [pypy-dev] STM/AME for CPython? >Actually I think we need foremost a new name. This is not about doing >great things in any of the domains you list. It has no obvious >connexion to any separation of metadata, security, capability, >distributed update, mobile, or any other specific environment. This is >merely about adding a new one-liner API usable by programs in order to >behave as if they were run serially, but actually run on multiple >cores. So even using the name "STM" is rather wrong and confusing: >STM is only one possible implementation. To me the goal is to create a simple, non-invasive (an aspect being composability), clutter-free form of concurrency. In short, a Pythonic approach. A suggestion: Pythonic Concurrency Control: PCC or PyCC Cheers, Andrew -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arigo at tunes.org Mon Apr 16 20:34:16 2012 From: arigo at tunes.org (Armin Rigo) Date: Mon, 16 Apr 2012 20:34:16 +0200 Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: <1334506022.63382.YahooMailNeo@web120706.mail.ne1.yahoo.com> References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> <1333841132.81890.YahooMailNeo@web120704.mail.ne1.yahoo.com> <1334506022.63382.YahooMailNeo@web120706.mail.ne1.yahoo.com> Message-ID: Hi Andrew, On Sun, Apr 15, 2012 at 18:07, Andrew Francis wrote: > So returning to the example, a programme more in the spirit of what you are > doing is: No, you are still missing my point. The goal is to work on existing stackless programs, not to write custom programs that combine "stackless" and "transaction" explicitly. Yes, I'm saying that _any_ existing stackless-based program can be run on multiple cores (assuming common causes of conflicts are found and removed). It doesn't make much sense to go into details now because without an actual implementation we can't really know for sure what the common causes of conflicts will be --- not to mention, I don't like to talk endlessly just to try to convince people that it's actually even possible :-) A bientôt, Armin. From kk1674 at nyu.edu Mon Apr 16 21:36:53 2012 From: kk1674 at nyu.edu (Kibeom Kim) Date: Mon, 16 Apr 2012 15:36:53 -0400 Subject: [pypy-dev] update (+patch) on embedding pypy Message-ID: >>> On Tue, Apr 3, 2012 at 11:32 AM, Roberto De Ioris >>> wrote: >>>> >>>> >>>>> >>>>> Ok I see. >>>>> >>>>> Is the rest of the API used going to be cpyext? If so, then >>>>> Py_Initialize is indeed a perfect choice. >>>>> >>>> >>>> >>>> I am about to add: >>>> >>>> Py_SetPythonHome >>>> Py_SetProgramName >>>> Py_Finalize >>>> >>>> i will put them into >>>> >>>> module/cpyext/src/pythonrun.c >>>> >>>> Do you think Py_Initialize should go there too ?
>>>> >>>> -- >>>> Roberto De Ioris >>>> http://unbit.it >>> >>> Sounds like a good idea. Should I merge the pull request now or wait >>> for the others? >>> >>> >> >> I think it is better to wait. Moving that to cpyext will avoid messing >> with translators (adding more exported symbols) too. >> >> > >Ok, i am pretty satisfied with the current code (i have made a pull request). > >I have implemented: > >Py_Initialize >Py_Finalize >Py_SetPythonHome >Py_SetProgramName > >all as rpython-cpyext except for Py_Initialize being split in a C part >(it requires a call to RPython_StartupCode) > >Successfully tested with current uWSGI tip. Py_SetPythonHome adds flawless >support for virtualenv. Great work! :) I have two questions: 1. Can other embedding C API calls (e.g. PyObject_CallObject) be supported in a similar manner? 2. Can we embed multiple independent pypy? (http://bytes.com/topic/python/answers/793370-multiple-independent-python-interpreters-c-c-program) -Kibeom Kim From amauryfa at gmail.com Mon Apr 16 23:03:16 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 16 Apr 2012 23:03:16 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: Message-ID: Hi, 2012/4/16 Kibeom Kim > 1. Can other embedding C API calls (e.g. PyObject_CallObject) be supported > in a similar manner? > There are already 430 supported functions... PyObject_CallObject was one of the easiest. The recent developments brought the ability to start a pypy interpreter from C code. > 2. Can we embed multiple independent pypy? > ( > http://bytes.com/topic/python/answers/793370-multiple-independent-python-interpreters-c-c-program > ) > Not very well, but this is also the case for CPython. Note that PyObject_CallObject for example has no context to specify which interpreter it would use; and there can only be one PyObject_CallObject function in a single executable. Why do you need independent interpreters? Would multiple threads suit your needs?
-- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrewfr_ice at yahoo.com Tue Apr 17 00:44:52 2012 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Mon, 16 Apr 2012 15:44:52 -0700 (PDT) Subject: [pypy-dev] Question about stm_descriptor_init(), tasklets and OS threads Message-ID: <1334616292.8656.YahooMailNeo@web120705.mail.ne1.yahoo.com> Hi Armin: I am looking at stm_descriptor_init(). Right now it makes a call to pthread_self(). In a potential Stackless prototype, I would want it to get the current tasklet instead. Shouldn't this be enough to get a trivial implementation of Stackless (by trivial, one thread; hopefully by sticking to a single thread we don't have to alter any low-level locking stuff) to interact with the transaction module? Enough to analyse programmes? Cheers, Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From kk1674 at nyu.edu Tue Apr 17 01:46:35 2012 From: kk1674 at nyu.edu (Kibeom Kim) Date: Mon, 16 Apr 2012 19:46:35 -0400 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: Message-ID: On Mon, Apr 16, 2012 at 5:03 PM, Amaury Forgeot d'Arc wrote: > Hi, > > 2012/4/16 Kibeom Kim >> >> 1. Can other embedding C API calls (e.g. PyObject_CallObject) be supported >> in a similar manner? > > > There are already 430 supported functions... > PyObject_CallObject was one of the easiest. > The recent developments brought the ability to start a pypy interpreter from > C code. > oh ok. >> >> 2. Can we embed multiple independent pypy? >> >> (http://bytes.com/topic/python/answers/793370-multiple-independent-python-interpreters-c-c-program) > > > Not very well, but this is also the case for CPython. > Note that PyObject_CallObject for example has no context to specify which > interpreter it would use; > and there can only be one PyObject_CallObject function in a single > executable.
> > Why do you need independent interpreters? > Would multiple threads suit your needs? > > -- > Amaury Forgeot d'Arc It's not my need, but I think that would be a better-designed interpreter lib. If pypy decides to support it, I guess there should be two versions of the API set, one for CPython API compatibility, and the other one for multiple independent interpreter support:

Py_Initialize();
Py_Initialize(PyPyInterpreter pypyinterpreter);
PyObject* PyObject_CallObject(PyObject *callable_object, PyObject *args);
PyObject* PyObject_CallObject(PyPyInterpreter pypyinterpreter, PyObject *callable_object, PyObject *args);

But maybe no one needs it... I don't know.. -Kibeom Kim From bokr at oz.net Tue Apr 17 08:24:15 2012 From: bokr at oz.net (Bengt Richter) Date: Mon, 16 Apr 2012 23:24:15 -0700 Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> <1333841132.81890.YahooMailNeo@web120704.mail.ne1.yahoo.com> <1334506022.63382.YahooMailNeo@web120706.mail.ne1.yahoo.com> Message-ID: <4F8D0C8F.3020205@oz.net> Hi Armin, On 04/16/2012 11:34 AM Armin Rigo wrote: > Hi Andrew, > > On Sun, Apr 15, 2012 at 18:07, Andrew Francis wrote: >> So returning to the example, a programme more in the spirit of what you are >> doing is: > > No, you are still missing my point. The goal is to work on existing > stackless programs, not to write custom programs that combine > "stackless" and "transaction" explicitly. > > Yes, I'm saying that _any_ existing stackless-based program can be run > on multiple cores (assuming common causes of conflicts are found and > removed).
It doesn't make much sense to go into details now because > without an actual implementation we can't really know for sure what > the common causes of conflicts will be --- not to mention, I don't > like to talk endlessly just to try to convince people that it's > actually even possible :-) > > > A bientôt, > > Armin. Do you want to turn an arbitrary number of cores into a virtual uniprocessor and then use a "yield" (a la OCM) as a kind of logical critical section marker for seamless coroutine-like transfer between one section (space between yields) and another such section in another process? IOW, everything between yield executions is logically guaranteed serial within and atomic as a whole? And everything becomes a speculatively executed critical section, with potential rollback and retry? For removing "common causes of conflicts", are you thinking of static analysis to separate yield-demarked thread sections into probability-of-contention categories for different strategies of scheduling execution by actual cores? Or maybe not just static analysis, but also JIT-like tracing of contention and dynamically rewriting with a mutex to serialize where conflict detection/rollback/retry thrashes? Does that seem possible? How do you avoid being overprotective? E.g., if x,y,z are being written, and x,y *has* to be atomically coherent, like coordinates, but z can be any version, should the programmer write yields tightly around coherent outputs, e.g., "... yield; x=foo(); y=bar(); yield; z=baz() ..." in order not to create a non-requirement that z also be synced with x,y? Should there not be some way of programming atomicity of noun sets as well as verb sequences? E.g., what do the new concepts do for supporting programming an atomic class concept that would inherit from some base class that guarantees atomic execution of methods and thus synced access to guaranteed coherent attributes?
(Which probably exists with locks etc already, but wondering what the new concepts bring to it). Really curious what you have up your sleeve. OTOH, I understand that you probably have experiments and thoughts under way that you'd rather work on than discuss or explain, so back to lurking for some decent interval ;-) Regards, Bengt Richter From arigo at tunes.org Tue Apr 17 10:19:41 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 17 Apr 2012 10:19:41 +0200 Subject: [pypy-dev] Question about stm_descriptor_init(), tasklets and OS threads In-Reply-To: <1334616292.8656.YahooMailNeo@web120705.mail.ne1.yahoo.com> References: <1334616292.8656.YahooMailNeo@web120705.mail.ne1.yahoo.com> Message-ID: Hi Andrew, On Tue, Apr 17, 2012 at 00:44, Andrew Francis wrote: > I am looking at stm_descriptor_init(). Right now makes a call to > pthread_self(). In a potential Stackless prototype, I would want it to get > the current tasklet instead. I don't understand why at all, sorry. I will stick to my position that the Stackless module should be modified to use the transaction module internally, and that no editing of the low-level RPython and C code is necessary. It is possible that using the transaction module in pure Python from lib_pypy/stackless.py is not really working, in which case you may need to edit pypy/module/_continuation instead and call directly pypy.rlib.rstm in RPython. But you definitely don't need to edit anything at a lower level. A bientôt, Armin. From arigo at tunes.org Tue Apr 17 10:31:52 2012 From: arigo at tunes.org (Armin Rigo) Date: Tue, 17 Apr 2012 10:31:52 +0200 Subject: [pypy-dev] The Work Plan Re: STM proposal funding In-Reply-To: <4F8D0C8F.3020205@oz.net> References: <1332775738.67346.YahooMailNeo@web120702.mail.ne1.yahoo.com> <1332951801.7099.YahooMailNeo@web120701.mail.ne1.yahoo.com> <1333841132.81890.YahooMailNeo@web120704.mail.ne1.yahoo.com> <1334506022.63382.YahooMailNeo@web120706.mail.ne1.yahoo.com> <4F8D0C8F.3020205@oz.net> Message-ID: Hi Bengt, I feel like I have actually already explained over and over again what I am doing.
But it's true that such communication has been going on on various channels. So, here is the documentation I've got so far: https://bitbucket.org/pypy/pypy/raw/stm-gc/lib_pypy/transaction.py In particular, this is not OCM, because it doesn't work with explicit yields. It has typically longer transactions; the essential point is that it is not possible to call a random function that will unexpectedly break the transaction in two parts by calling "yield". A bientôt, Armin. From amauryfa at gmail.com Tue Apr 17 10:58:29 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 17 Apr 2012 10:58:29 +0200 Subject: [pypy-dev] update (+patch) on embedding pypy In-Reply-To: References: Message-ID: 2012/4/17 Kibeom Kim > It's not my needs, but I think that's better-designed interpreter lib. > If pypy decides to support, I guess there should be two versions of > api sets, one for CPython api compatibility, and the other one for > multiple independent interpreter support. > > Py_Initialize(); > Py_Initialize(PyPyInterpreter pypyinterpreter); > > PyObject* PyObject_CallObject(PyObject *callable_object, PyObject *args) > PyObject* PyObject_CallObject(PyPyInterpreter pypyinterpreter, PyObject > *callable_object, PyObject *args) > > But maybe no one needs it... I don't know.. > This is an interesting evolution indeed; and pypy code (written in RPython) already passes a "space" object to every function. But having several object spaces in the same binary does not work at the moment. And even once this is sorted out, I think it's a bad idea to reproduce the CPython API, only to add this new parameter. I'm sure a better C API could be designed, that would integrate more easily with the pypy implementation. (Use handles instead of pointers, don't expose concrete objects, etc) -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arigo at tunes.org Thu Apr 19 16:51:08 2012 From: arigo at tunes.org (Armin Rigo) Date: Thu, 19 Apr 2012 16:51:08 +0200 Subject: [pypy-dev] pypy MemoryError crash In-Reply-To: <1334681919.59207.YahooMailNeo@web45510.mail.sp1.yahoo.com> References: <1333498491.32373.YahooMailNeo@web45509.mail.sp1.yahoo.com> <4F7BF7C9.901@gmx.de> <1334681919.59207.YahooMailNeo@web45510.mail.sp1.yahoo.com> Message-ID: Hi Roger, On Tue, Apr 17, 2012 at 18:58, Roger Flores wrote: > Were either of you able to confirm that it's easy to make PyPy crash with a > MemoryError? Ah, I missed the fact that you replied to us privately. That's generally a very bad idea, because if we are busy or forget about your bug, others would normally jump in. I'm not able to reproduce the bug you have: on my machine, compressing works fine, but decompressing consumes (slowly) more and more memory. I guess that this behavior is unexpected, unless you really have a reason. The point is that I'm running out of RAM (2GB total) before seeing any crash. I guess that the crash depends very finely on how much free memory+swap you have. Can you tell us these numbers? I could try to reproduce the crash using "ulimit", if necessary on machines with more RAM.
However, looking carefully at the traceback, it is in theory possible to get it if the program just fits in the memory, and tries to quit, calling the shut-down functions. These shut-down functions might need to allocate a little bit more. If this triggers collection, and if collection raises MemoryError, then we could get such a traceback. A bientôt, Armin. From aidembb at yahoo.com Thu Apr 19 18:50:55 2012 From: aidembb at yahoo.com (Roger Flores) Date: Thu, 19 Apr 2012 09:50:55 -0700 (PDT) Subject: [pypy-dev] pypy MemoryError crash In-Reply-To: References: <1333498491.32373.YahooMailNeo@web45509.mail.sp1.yahoo.com> <4F7BF7C9.901@gmx.de> <1334681919.59207.YahooMailNeo@web45510.mail.sp1.yahoo.com> Message-ID: <1334854255.45995.YahooMailNeo@web45503.mail.sp1.yahoo.com> >Ah, I missed the fact that you replied to us privately. Sorry, I didn't want to send an attachment to everyone on the list and I didn't see much other interest. I'm happy to try a reminder email before asking everyone again. >on my machine, compressing works fine, but decompressing consumes (slowly) more and more memory. Good, it should compress just fine. And normally decompression works fine too. But the special version I sent contains the one-character bug that should result in it slowly consuming all memory. This is the goal, so it triggers the MemoryError issue. Python throws a MemoryError and displays a traceback. The Pypy bug is that it dies in Rpython, displaying a Rpython traceback, instead of throwing a MemoryError, and showing a traceback of my code as it runs out of memory. I missed this distinction for a while. >The point is that I'm running out of RAM (2GB total) before seeing any crash. I guess that the crash depends very finely on how much free memory+swap you have. On my Windows laptop I have 8GB but I can see it only using a couple GB before crashing. I also run Ubuntu in VirtualBox, and it's set to 2GB of memory like your computer, plus some amount of swap.
The decompression only runs for a minute or so before the MemoryError happens. Basically if you have enough memory to compress, I'd think you'd have enough memory. >it is in theory possible to get it if the programs just fits in the memory, and tries to quit, calling the shut-down functions. These shut-down functions might need to allocate a little bit more. I've seen the Pypy MemoryError work just fine. I think this is a unique case that hits that code just right to break it, along the lines of your thinking. -Roger ________________________________ From: Armin Rigo To: Roger Flores ; PyPy Developer Mailing List Sent: Thursday, April 19, 2012 7:51 AM Subject: Re: [pypy-dev] pypy MemoryError crash Hi Roger, On Tue, Apr 17, 2012 at 18:58, Roger Flores wrote: > Were either of you able to confirm that it's easy to make PyPy crash with a > MemoryError? Ah, I missed the fact that you replied to us privately. That's generally a very bad idea, because if we are busy or forget about your bug, others would normally jump in. I'm not able to reproduce the bug you have: on my machine, compressing works fine, but decompressing consumes (slowly) more and more memory. I guess that this behavior is unexpected, unless you really have a reason. The point is that I'm running out of RAM (2GB total) before seeing any crash. I guess that the crash depends very finely on how much free memory+swap you have. Can you tell us these numbers? I could try to reproduce the crash using "ulimit", if necessary on machines with more RAM. However, looking carefully at the traceback, it is in theory possible to get it if the programs just fits in the memory, and tries to quit, calling the shut-down functions. These shut-down functions might need to allocate a little bit more. If this triggers collection, and if collection raises MemoryError, then we could get such a traceback. A bientôt, Armin. -------------- next part -------------- An HTML attachment was scrubbed...
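For reference, the behaviour Roger expects is ordinary Python semantics: running out of memory raises MemoryError like any other exception, with a traceback through the application code, rather than aborting inside RPython. A small hedged sketch (assuming a 64-bit interpreter, where a hopelessly large allocation fails immediately without touching real RAM):

```python
# MemoryError should surface as an ordinary, catchable Python exception.
# On a 64-bit build, asking for ~1 exabyte fails fast inside malloc;
# the exact failure mode is platform-dependent, so this is only a sketch.

def allocate(nbytes):
    # Stand-in for the decompressor's ever-growing buffer.
    return bytearray(nbytes)

try:
    allocate(10 ** 18)  # no machine can satisfy this request
    outcome = "allocated"
except MemoryError:
    outcome = "caught MemoryError"
```

A correct interpreter lets the `except MemoryError` clause run; the reported PyPy bug is that the process dies with an RPython-level traceback before application code ever sees the exception.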
URL: From andrewfr_ice at yahoo.com Thu Apr 19 19:43:34 2012 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Thu, 19 Apr 2012 10:43:34 -0700 (PDT) Subject: [pypy-dev] Question about stm_descriptor_init(), tasklets and OS threads References: <1334616292.8656.YahooMailNeo@web120705.mail.ne1.yahoo.com> Message-ID: <1334857414.94198.YahooMailNeo@web120705.mail.ne1.yahoo.com> Hi Armin: ________________________________ From: Armin Rigo To: Andrew Francis Cc: Py Py Developer Mailing List Sent: Tuesday, April 17, 2012 4:19 AM Subject: Re: Question about stm_descriptor_init(), tasklets and OS threads >I don't understand why at all, sorry. Please bear with me :-). I am in the same position now as I was in 2007 when I was trying to make Stackless interoperate with Twisted. A lot of silly questions. A lot of misconceptions. A lot of looking at code to see how things worked. And some dusting off the old Operating System text books. >I will stick to my position that the Stackless module should be modified to use the transaction >module internally, and that no editing of the low-level RPython and C >code is necessary. Noted. Again, when you write a position paper, this would be listed as a fundamental design principle. >It is possible that using the transaction module >in pure Python from lib_pypy/stackless.py is not really working, in >which case you may need to edit pypy/module/_continuation instead and >call directly pypy.rlib.rstm in RPython. But you definitely don't >need to edit anything at a lower level. I am trying to understand enough to get into a position to attempt an integration. I will start with a Stackless bank account programme (I have written a version in RPython). A very simple programme to write. To make things interesting, my plan is to make one tasklet call schedule(), hence causing a contention. It is important to note there is only one OS thread in action.
However, what is not clear to me (and you are in a better position to answer) is whether the underlying low-level rstm machinery cares that it is user-space tasklets, not OS threads, that are the units of execution causing a contention. Cheers, Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Thu Apr 19 21:37:15 2012 From: arigo at tunes.org (Armin Rigo) Date: Thu, 19 Apr 2012 21:37:15 +0200 Subject: [pypy-dev] Question about stm_descriptor_init(), tasklets and OS threads In-Reply-To: <1334857414.94198.YahooMailNeo@web120705.mail.ne1.yahoo.com> References: <1334616292.8656.YahooMailNeo@web120705.mail.ne1.yahoo.com> <1334857414.94198.YahooMailNeo@web120705.mail.ne1.yahoo.com> Message-ID: Hi Andrew, On Thu, Apr 19, 2012 at 19:43, Andrew Francis wrote: > I am trying to understand enough to get into a position to attempt an > integration. I believe you are trying to approach the problem from the bottom-most level up --- which is a fine way to approach problems; but in this case, you are missing that there are still a few levels between where you got so far and the destination, which is the stackless.py module in pure Python. Let us try to approach it top-down instead, because there are far fewer levels to dig through. The plan is to make any existing stackless-using program work on multiple cores. Take a random existing stackless example, and start to work by editing the pure Python lib_pypy/stackless.py (or, at first, writing a new version from scratch, copying parts of the original). The idea is to use the "transaction" module. The goal would be to not use the _squeue, which is a deque of pending tasklets, but instead to add pending tasklets with transaction.add(callback). Note how the notion of tasklets in the _squeue (which offers some specific order) is replaced by the notion of which callbacks have been added. See below for what is in each callback.
The transaction.run() would occur in the main program, directly called by stackless.schedule(). So the callbacks are all invoked in the main program too; in fact all the "transaction" dispatching is done in the main program, and only scheduling new tasklets can occur anywhere. The callbacks would just contain a switch to the tasklet. When the tasklet comes back, if it is not finished, re-add() it. This is all. You have to make sure that all tasklet.switch()es internally go back to the main program, and not directly to another tasklet. This should ensure that the duration of every transaction is exactly the time in a tasklet between two calls to switch(). Of course, this can all be written and tested against "transaction.py", the pure Python emulator. Once it is nicely working, you are done. You just have to wait for a continuation-capable version of pypy-stm; running the same program on it, you'll get multi-core usage. A bientôt, Armin. From arigo at tunes.org Thu Apr 19 22:08:05 2012 From: arigo at tunes.org (Armin Rigo) Date: Thu, 19 Apr 2012 22:08:05 +0200 Subject: [pypy-dev] Question about stm_descriptor_init(), tasklets and OS threads In-Reply-To: References: <1334616292.8656.YahooMailNeo@web120705.mail.ne1.yahoo.com> <1334857414.94198.YahooMailNeo@web120705.mail.ne1.yahoo.com> Message-ID: Re-Hi, On Thu, Apr 19, 2012 at 21:37, Armin Rigo wrote: > You have to make sure that all tasklet.switch()es internally go back > to the main program, and not directly to another tasklet. Ah, sorry, I confused the stackless interface. You don't switch() to tasklets, but instead call send() and receive() on channels. It's basically the same: whenever we call either send() or receive() on a channel, we internally switch back to the main program, i.e. back into the transaction's callback.
This one needs to figure out, depending on the channel state and the operation we do, if the same tasklet can continue to run now or not --- which is done with transaction.add(callback-continuing-the-same-tasklet) --- and also if another blocked tasklet can now proceed --- which is done with transaction.add(callback-continuing-the-other-tasklet). So depending on the cases it will add() zero, one or two more transactions, and then finish. A bientôt, Armin. From rorsoft at gmail.com Fri Apr 20 06:13:50 2012 From: rorsoft at gmail.com (gmail) Date: Fri, 20 Apr 2012 12:13:50 +0800 Subject: [pypy-dev] output readable c Message-ID: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> I find that the pypy translator's output C files contain too many 'goto' statements. It's hard to read and understand these C files. I try to make it output with C keywords: if..else.. while...break...continue and now the output C file looks much better. my pypy version is pypy-pypy-2346207d9946 download from: https://bitbucket.org/pypy/pypy/get/release-1.8.zip test sample input file a2.py: import sys def entry_point(argv): a = [1,2,3,4] a.extend([4,5]) print a return len(a) def target(*args): return entry_point, None if __name__ == '__main__': entry_point(sys.argv) After running the command: translator\goal\translate.py a2.py I can find the file a2.c in my temporary directory. The function pypy_g_entry_point in it is 662 lines and contains 103 gotos. After replacing 2 attached files: pypy\translator\c\funcgen.py pypy\translator\c\bookaa_cpp.py and running the command again, I get an a2.c where pypy_g_entry_point is 539 lines and contains only 20 gotos. I am still working hard to improve pypy to get readable C++ output. Anyone interested in this? Bookaa -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
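The dispatch scheme Armin lays out in the two mails above (each transaction runs one tasklet until its next switch back to the main program, and re-add()s a continuation callback only if the tasklet is not finished) can be mimicked with plain generators standing in for tasklets. Everything below is an illustrative assumption, not code from lib_pypy/stackless.py; real tasklets would use continuations, and send()/receive() would decide which callbacks to add:

```python
# Sketch of the described dispatch scheme, with generators as stand-in
# tasklets. 'add'/'run' emulate the 'transaction' module serially.

pending = []

def add(callback, *args):
    pending.append((callback, args))

def run():
    # Played by the main program (stackless.schedule() in the real design).
    while pending:
        cb, args = pending.pop(0)
        cb(*args)

trace = []

def make_tasklet(name, steps):
    def body():
        for i in range(steps):
            trace.append((name, i))
            yield  # stands in for switching back to the main program
    return body()

def resume(gen):
    # One transaction: run the tasklet until its next switch back.
    try:
        next(gen)
    except StopIteration:
        return          # tasklet finished: do not re-add it
    add(resume, gen)    # not finished: schedule its continuation

for t in (make_tasklet("a", 2), make_tasklet("b", 1)):
    add(resume, t)
run()
# trace interleaves the two tasklets: ("a", 0), ("b", 0), ("a", 1)
```

The duration of each transaction is exactly one stretch of tasklet execution between two yields, which matches the property Armin wants from switch()/send()/receive().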
Name: bookaa_cpp.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: funcgen.py URL: From fijall at gmail.com Fri Apr 20 09:09:12 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 20 Apr 2012 09:09:12 +0200 Subject: [pypy-dev] output readable c In-Reply-To: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> Message-ID: On Fri, Apr 20, 2012 at 6:13 AM, gmail wrote: > ** > I find pypy translator output c files consist too many 'goto' statement. > Its hard to read and understand these c files. > I try to make it output with c keywords: > if..else.. > while...break...continue > and now the output c file looks pretty better. > > my pypy version is pypy-pypy-2346207d9946 download from: > https://bitbucket.org/pypy/pypy/get/release-1.8.zip > > > test sample input file a2.py: > > import sys > > def entry_point(argv): > a = [1,2,3,4] > a.extend([4,5]) > print a > return len(a) > > def target(*args): > return entry_point, None > > if __name__ == '__main__': > entry_point(sys.argv) > > after run command: > translator\goal\translate.py a2.py > > I can find file a2.c in my temperary directory. The funcion > pypy_g_entry_point in it is 662 lines and contains 103 goto. > > after replace 2 attach files : > pypy\translator\c\funcgen.py > pypy\translator\c\bookaa_cpp.py > and run the command again, I get a2.c with pypy_g_entry_point is 539 > lines and only contains 20 goto. > > I am still work hard try to improve pypy to get readable c++ output. > Anyone interest in this ? > > Bookaa > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > Hi. Your code does not contain any tests - we won't accept code that's untested. 
Second, please send your patches in diff format so we can have a better look at what you have changed (hg diff sounds like a good plan) Cheers, fijal -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Fri Apr 20 10:47:40 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 20 Apr 2012 10:47:40 +0200 Subject: [pypy-dev] [cpyext] partial fake PEP393 implementation to provide access to single unicode characters in strings In-Reply-To: References: Message-ID: Hi Stefan, On Sat, Apr 14, 2012 at 18:44, Stefan Behnel wrote: > PEP393 (the new Unicode type in Py3.3) defines a rather useful C interface > towards the characters of a Unicode string. I think it would be cool if > cpyext provided that, so that access to single characters won't require > copying the unicode buffer into C space anymore. FWIW, if it makes sense, you can add PyPy-specific API functions not in the standard CPython C API, too. I'm thinking about accessing *string* characters, for example. > Specifically, the > intention is to avoid creating a 1-character unicode string copy before > taking its ord(). Does this happen automatically, or is there a way to make > sure it does that? In RPython, indexing a string returns a single char, which is a different low-level type than a full string (just "char" in C). A bientôt, Armin. From amauryfa at gmail.com Fri Apr 20 11:02:57 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 20 Apr 2012 11:02:57 +0200 Subject: [pypy-dev] [cpyext] partial fake PEP393 implementation to provide access to single unicode characters in strings In-Reply-To: References: Message-ID: 2012/4/20 Armin Rigo > On Sat, Apr 14, 2012 at 18:44, Stefan Behnel wrote: > > PEP393 (the new Unicode type in Py3.3) defines a rather useful C > interface > > towards the characters of a Unicode string.
I think it would be cool if > > cpyext provided that, so that access to single characters won't require > > copying the unicode buffer into C space anymore. > > FWIW, if it makes sense, you can add PyPy-specific API functions not > in the standard CPython C API, too. But is it desirable? The first call to PyUnicode_AsUnicode will allocate and copy the unicode buffer, but subsequent calls will quickly return the same address. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Fri Apr 20 11:16:01 2012 From: arigo at tunes.org (Armin Rigo) Date: Fri, 20 Apr 2012 11:16:01 +0200 Subject: [pypy-dev] [cpyext] partial fake PEP393 implementation to provide access to single unicode characters in strings In-Reply-To: References: Message-ID: Hi Amaury, On Fri, Apr 20, 2012 at 11:02, Amaury Forgeot d'Arc wrote: > But is it desirable? The first call to PyUnicode_AsUnicode will allocate and > copy the unicode buffer, > but subsequent calls will quickly return the same address. Indeed, it's a bit unclear. If I may repeat myself, I still think that the performance problems of cpyext are really due to the costly double-mapping between PyPy's real objects and PyObjects, together with INCREF/DECREF being function calls. This is the first place I would look at if I were concerned about it. (Stefan: see a previous mail where I described how to start.) A bientôt, Armin. From stefan_ml at behnel.de Fri Apr 20 21:31:18 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 20 Apr 2012 21:31:18 +0200 Subject: [pypy-dev] [cpyext] partial fake PEP393 implementation to provide access to single unicode characters in strings In-Reply-To: References: Message-ID: Armin Rigo, 20.04.2012 11:16: > On Fri, Apr 20, 2012 at 11:02, Amaury Forgeot d'Arc wrote: >> But is it desirable?
The first call to PyUnicode_AsUnicode will allocate and >> copy the unicode buffer, >> but subsequent calls will quickly return the same address. > Indeed, it's a bit unclear. If I may repeat myself, I still think > that the performance problems of cpyext are really due to the costly > double-mapping between PyPy's real objects and PyObjects, together > with INCREF/DECREF being function calls. This is the first place I > would look at if I were concerned about it. Well, have you seen my macro changes in issue 1121? https://bugs.pypy.org/issue1121 At least for the new ref-counting macros, I already presented the usual stupid micro benchmark in a previous mail, giving me almost a factor of 2 in performance for objects with a ref-count > 1 in C space. I'll add the numbers to the ticket. Stefan From alexander.pyattaev at tut.fi Sat Apr 21 00:54:18 2012 From: alexander.pyattaev at tut.fi (Alexander Pyattaev) Date: Sat, 21 Apr 2012 01:54:18 +0300 Subject: [pypy-dev] output readable c In-Reply-To: References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> Message-ID: <7695392.nq74qgiBlb@hunter-laptop> What is the purpose? For the target c/c++ compiler it is all the same, isn't it? Or is the purpose to make a python->c++ converter? Cheers, Alex On Friday, 20 April 2012 at 09:09:12, Maciej Fijalkowski wrote: On Fri, Apr 20, 2012 at 6:13 AM, gmail wrote: I find pypy translator output c files consist too many 'goto' statement. Its hard to read and understand these c files. I try to make it output with c keywords: if..else.. while...break...continue and now the output c file looks pretty better.
my pypy version is pypy-pypy-2346207d9946 download from: https://bitbucket.org/pypy/pypy/get/release-1.8.zip test sample input file a2.py: import sys def entry_point(argv): a = [1,2,3,4] a.extend([4,5]) print a return len(a) def target(*args): return entry_point, None if __name__ == '__main__': entry_point(sys.argv) after run command: translator\goal\translate.py a2.py I can find file a2.c in my temperary directory. The funcion pypy_g_entry_point in it is 662 lines and contains 103 goto. after replace 2 attach files : pypy\translator\c\funcgen.py pypy\translator\c\bookaa_cpp.py and run the command again, I get a2.c with pypy_g_entry_point is 539 lines and only contains 20 goto. I am still work hard try to improve pypy to get readable c++ output. Anyone interest in this ? Bookaa _______________________________________________ pypy-dev mailing list pypy-dev at python.org http://mail.python.org/mailman/listinfo/pypy-dev Hi. Your code does not contain any tests - we won't accept code that's untested. Second, please send your patches in diff format so we can have a better look on what you have changed (hg diff sounds like a good plan) Cheers, fijal -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Sat Apr 21 01:48:19 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sat, 21 Apr 2012 01:48:19 +0200 Subject: [pypy-dev] output readable c In-Reply-To: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> Message-ID: 2012/4/20 gmail > I am still work hard try to improve pypy to get readable c++ output. > Anyone interest in this ? > The result is much better, especially with long functions. Yes, this is interesting! Continue! Your code needs to be polished though: comments, better names, pep8 convention... And also an explanation of the algorithms. 
Try to run the tests: you will see a missing import, and some tests fail (in translator/c/test/test_backendoptimized.py, the ones with "switch"). Also I've seen a duplicate label when translating targetnopstandalone.py. fijal, I don't think it's easy to write tests for funcgen. Or maybe only count the goto statements, like JIT tests count external calls? In the end someone will have to check the generated assembler in some cases. I remember that the "_back" labels were added to have loops look like loops, and help gcc. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From rorsoft at gmail.com Sat Apr 21 05:01:49 2012 From: rorsoft at gmail.com (bookaa) Date: Sat, 21 Apr 2012 11:01:49 +0800 Subject: [pypy-dev] output readable c In-Reply-To: <7695392.nq74qgiBlb@hunter-laptop> References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> <7695392.nq74qgiBlb@hunter-laptop> Message-ID: <11A42788052D434D9153D381BA53F8A6@vSHliutaotao> Yes, pypy's output C files are good enough for C compilers. But it's terrible if you want to read the C source code. I am really very interested in making a Python-to-C++ converter, based on the pypy translator. thanks Bookaa From: Alexander Pyattaev Sent: Saturday, April 21, 2012 6:54 AM To: pypy-dev at python.org Subject: Re: [pypy-dev] output readable c What is the purpose? For the target C/C++ compiler it is all the same, isn't it? Or is the purpose to make a Python-to-C++ converter? Cheers, Alex On Friday, 20 April 2012 at 09:09:12, Maciej Fijalkowski wrote: On Fri, Apr 20, 2012 at 6:13 AM, gmail wrote: I find that the pypy translator's output C files contain too many 'goto' statements. It's hard to read and understand these C files. I am trying to make it output the C keywords if..else, while, break and continue instead, and now the output C file looks much better.
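Amaury's earlier suggestion of counting goto statements, the way JIT tests count external calls, could look roughly like this (a hypothetical sketch; count_gotos is an invented helper, not existing PyPy test code):

```python
import re

def count_gotos(c_source):
    # Coarse metric over generated C source: the number of goto
    # statements.  For this sketch a word-boundary match is enough;
    # real test code would want to skip comments and string literals.
    return len(re.findall(r'\bgoto\b', c_source))

# A funcgen test could translate a small function and assert an upper
# bound on the gotos in its C output, analogous to asserting call
# counts in JIT tests:
generated = """
    if (x > 0) goto block1;
    goto block2;
  block1:
    return x;
  block2:
    return 0;
"""
assert count_gotos(generated) == 2
```

Such a count would not prove the generated code is correct, only that the structuring pass actually removed jumps; correctness would still come from running the compiled result.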
my pypy version is pypy-pypy-2346207d9946, downloaded from https://bitbucket.org/pypy/pypy/get/release-1.8.zip. Test sample input file a2.py:

import sys

def entry_point(argv):
    a = [1, 2, 3, 4]
    a.extend([4, 5])
    print a
    return len(a)

def target(*args):
    return entry_point, None

if __name__ == '__main__':
    entry_point(sys.argv)

After running the command translator\goal\translate.py a2.py, I can find the file a2.c in my temporary directory. The function pypy_g_entry_point in it is 662 lines long and contains 103 gotos. After replacing the 2 attached files, pypy\translator\c\funcgen.py and pypy\translator\c\bookaa_cpp.py, and running the command again, I get an a2.c in which pypy_g_entry_point is 539 lines long and contains only 20 gotos. I am still working hard to improve pypy to get readable C++ output. Is anyone interested in this? Bookaa _______________________________________________ pypy-dev mailing list pypy-dev at python.org http://mail.python.org/mailman/listinfo/pypy-dev Hi. Your code does not contain any tests - we won't accept code that's untested. Second, please send your patches in diff format so we can have a better look at what you have changed (hg diff sounds like a good plan). Cheers, fijal -------------------------------------------------------------------------------- _______________________________________________ pypy-dev mailing list pypy-dev at python.org http://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rorsoft at gmail.com Sat Apr 21 05:05:32 2012 From: rorsoft at gmail.com (bookaa) Date: Sat, 21 Apr 2012 11:05:32 +0800 Subject: [pypy-dev] output readable c In-Reply-To: References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> Message-ID: <6AD411BBEB23447BAC9290729ED7959D@vSHliutaotao> Thank you for the encouragement! As for the bugs, please tell me exactly how to run them, and which test. My system is Win7; pypy tests get many errors even without any change.
Bookaa From: Amaury Forgeot d'Arc Sent: Saturday, April 21, 2012 7:48 AM To: gmail Cc: pypy-dev at python.org Subject: Re: [pypy-dev] output readable c 2012/4/20 gmail I am still working hard to improve pypy to get readable C++ output. Is anyone interested in this? The result is much better, especially with long functions. Yes, this is interesting! Continue! Your code needs to be polished though: comments, better names, pep8 convention... And also an explanation of the algorithms. Try to run the tests: you will see a missing import, and some tests fail (in translator/c/test/test_backendoptimized.py, the ones with "switch"). Also I've seen a duplicate label when translating targetnopstandalone.py. fijal, I don't think it's easy to write tests for funcgen. Or maybe only count the goto statements, like JIT tests count external calls? In the end someone will have to check the generated assembler in some cases. I remember that the "_back" labels were added to have loops look like loops, and help gcc. -- Amaury Forgeot d'Arc -------------------------------------------------------------------------------- _______________________________________________ pypy-dev mailing list pypy-dev at python.org http://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rorsoft at gmail.com Sat Apr 21 05:12:09 2012 From: rorsoft at gmail.com (bookaa) Date: Sat, 21 Apr 2012 11:12:09 +0800 Subject: [pypy-dev] A test bug in Windows system Message-ID: <31B28641F5774BE2BFCA742B1C8B8750@vSHliutaotao> My OS is Win7. Trying pypy-pypy-2346207d9946, run: pypy>test_all.py translator\test\test_unsimplify.py will get an error: WindowsError: [Error 5] : .... it cannot os.unlink(..) the file. I find this is because fd = os.open(tmpfile, os.O_WRONLY | os.O_CREAT, 0) creates a read-only file on Windows. It should be fd = os.open(tmpfile, os.O_WRONLY | os.O_CREAT) After fixing this, there are still many test errors....
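The fix described above can be sketched as follows (a standalone illustration, not the actual pypy test code). The mode argument 0 strips every permission bit, which Windows maps to the read-only file attribute, making the later os.unlink() fail:

```python
import os
import tempfile

# The buggy call in the test was:
#     fd = os.open(tmpfile, os.O_WRONLY | os.O_CREAT, 0)
# Mode 0 creates the file with no permission bits; on Windows that
# becomes a read-only file, so a later os.unlink() raises
# WindowsError: [Error 5].

def create_and_remove(path):
    # Fixed call: omit the explicit mode (default 0o777, filtered by
    # the process umask), leaving the file writable and deletable.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    os.write(fd, b"probe")
    os.close(fd)
    os.unlink(path)          # succeeds on Windows and POSIX alike
    return not os.path.exists(path)

print(create_and_remove(os.path.join(tempfile.mkdtemp(), "probe.txt")))  # True
```

On POSIX the buggy variant happens to still be deletable (unlink permission lives on the directory, not the file), which is why the failure only showed up on Windows.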
Bookaa -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat Apr 21 08:11:39 2012 From: arigo at tunes.org (Armin Rigo) Date: Sat, 21 Apr 2012 08:11:39 +0200 Subject: [pypy-dev] kwargsdict-strategy Message-ID: Hi Carl Friedrich, Can you have a look at speed.python.org? It seems that merging the kwargsdict-strategy branch had mostly the effect of making twisted_iteration slower :-( Armin From amauryfa at gmail.com Sat Apr 21 09:51:28 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Sat, 21 Apr 2012 09:51:28 +0200 Subject: [pypy-dev] output readable c In-Reply-To: <6AD411BBEB23447BAC9290729ED7959D@vSHliutaotao> References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> <6AD411BBEB23447BAC9290729ED7959D@vSHliutaotao> Message-ID: 2012/4/21 bookaa > Thank you for the encouragement! > > As for the bugs, please tell me exactly how to run them, and which test. > > My system is Win7; pypy tests get many errors even without any change. > Tests in the translator/c directory should pass. For example: c:\python27\python pytest.py pypy\translator\c\test\test_backendoptimized.py And to translate the small (!) "hello world" target: c:\python27\python pypy\translator\goal\translate.py -O2 pypy\translator\goal\targetnopstandalone.py -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat Apr 21 12:40:51 2012 From: arigo at tunes.org (Armin Rigo) Date: Sat, 21 Apr 2012 12:40:51 +0200 Subject: [pypy-dev] A test bug in Windows system In-Reply-To: <31B28641F5774BE2BFCA742B1C8B8750@vSHliutaotao> References: <31B28641F5774BE2BFCA742B1C8B8750@vSHliutaotao> Message-ID: Hi Bookaa, 2012/4/21 bookaa : > run: > pypy>test_all.py translator\test\test_unsimplify.py > > will get an error: > WindowsError: [Error 5] : .... > it cannot os.unlink(..) a file. Yes, this is because you are using pypy to run test_all.py. Please use CPython.
There are issues like that, particularly on Windows, that we never dug into. (This one is due to http://pypy.org/compat.html : PyPy does not support refcounting semantics, so files are not closed immediately.) A bientôt, Armin. From bookaa at rorsoft.com Sun Apr 22 07:01:13 2012 From: bookaa at rorsoft.com (bookaa) Date: Sun, 22 Apr 2012 13:01:13 +0800 Subject: [pypy-dev] output readable c In-Reply-To: References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao><6AD411BBEB23447BAC9290729ED7959D@vSHliutaotao> Message-ID: <75689738F1C14BB28CD95C62627B358B@vSHliutaotao> Now I can pass the test_backendoptimized and targetnopstandalone. Files attached. Please tell me if any bugs thanks Bookaa From: Amaury Forgeot d'Arc Sent: Saturday, April 21, 2012 3:51 PM To: bookaa Cc: pypy-dev at python.org Subject: Re: [pypy-dev] output readable c 2012/4/21 bookaa thank you for encourage! as for the bugs, please tell me exacty how to run, which test. My system is Win7, pypy tests get many error even without any change. Tests in the translator/c directory should pass. For example: c:\python27\python pytest.py pypy\translator\c\test\test_backendoptimized.py And to translate the small (!) "hello world" target: c:\python27\python pypy\translator\goal\translate.py -O2 pypy\translator\goal\targetnopstandalone.py -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: bookaa_cpp.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: funcgen.py URL: From fijall at gmail.com Sun Apr 22 11:48:40 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 22 Apr 2012 11:48:40 +0200 Subject: [pypy-dev] output readable c In-Reply-To: <75689738F1C14BB28CD95C62627B358B@vSHliutaotao> References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> <6AD411BBEB23447BAC9290729ED7959D@vSHliutaotao> <75689738F1C14BB28CD95C62627B358B@vSHliutaotao> Message-ID: Hi.
I would really like this sort of change to come with its own tests (ones that check what was compiled, preferably). On Sun, Apr 22, 2012 at 7:01 AM, bookaa wrote: > Now I can pass the test_backendoptimized and targetnopstandalone. > > Files attached. > > Please tell me if any bugs > > thanks > > Bookaa > > *From:* Amaury Forgeot d'Arc > *Sent:* Saturday, April 21, 2012 3:51 PM > *To:* bookaa > *Cc:* pypy-dev at python.org > *Subject:* Re: [pypy-dev] output readable c > > 2012/4/21 bookaa > >> thank you for encourage! >> >> as for the bugs, please tell me exacty how to run, which test. >> >> My system is Win7, pypy tests get many error even without any change. >> > > Tests in the translator/c directory should pass. For example: > > c:\python27\python pytest.py > pypy\translator\c\test\test_backendoptimized.py > > And to translate the small (!) "hello world" target: > > c:\python27\python pypy\translator\goal\translate.py -O2 > pypy\translator\goal\targetnopstandalone.py > > -- > Amaury Forgeot d'Arc > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cfbolz at gmx.de Sun Apr 22 15:40:30 2012 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Sun, 22 Apr 2012 15:40:30 +0200 Subject: [pypy-dev] kwargsdict-strategy In-Reply-To: References: Message-ID: Hi Armin, Armin Rigo wrote: >Can you have a look at speed.python.org? It seems that merging the >kwargsdict-strategy branch had mostly the effect of making >twisted_iteration slower :-( Yes, it seems Twisted is really the worst case for argument matching. My new branch should help; I can disable kwargs on default until it's ready (tomorrow, when I have proper Internet).
Cheers, Carl Friedrich From skip at pobox.com Mon Apr 23 01:35:05 2012 From: skip at pobox.com (skip at pobox.com) Date: Sun, 22 Apr 2012 18:35:05 -0500 Subject: [pypy-dev] output readable c In-Reply-To: References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> <6AD411BBEB23447BAC9290729ED7959D@vSHliutaotao> <75689738F1C14BB28CD95C62627B358B@vSHliutaotao> Message-ID: <20372.38313.972033.184727@montanaro.dyndns.org> Maciej> I would really like this sort of changes to come with it's own Maciej> tests (ones that check what was compiled preferably). What might such tests look like? That is, how would they be different than tests which demonstrate that the current translation code is correct? (Are there translation-only unit tests for the current translator?) Wouldn't passing the existing tests be sufficient? I'm not at all familiar with the current code base, so I might be way off-base here. If so, my apologies. -- Skip Montanaro - skip at pobox.com - http://www.smontanaro.net/ From wlavrijsen at lbl.gov Mon Apr 23 07:08:48 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Sun, 22 Apr 2012 22:08:48 -0700 (PDT) Subject: [pypy-dev] cppyy: C++ bindings for PyPy Message-ID: Hi, as detailed in a couple of blog posts in the past (*), for some time now we have been working on making C++ bindings through the Reflex package available on PyPy, in the form of the "cppyy" module. Software is never done, and that is true also in this case, but it has reached a level of maturity at which it can be said to be usable. Initial docs are now up on: http://doc.pypy.org/en/latest/cppyy.html with instructions on how to setup the reflex-support branch and how to test it out. The documentation will be updated with more advanced and detailed topics in the next couple of weeks or so. There's a sizable (non-PyPy) set of unit tests that still need to be worked through, thus development will steadily continue at its current pace. 
Still, if you find that cppyy almost works for you except for a particular feature, feel free to ask for it to be prioritized. The biggest obstacle for most people (that are not in the field of HEP) will be the current set of dependencies. Although the dependency set for cppyy is really only Reflex, which could be distributed separately, for the CPython equivalent code the dependency set is a large portion of the ROOT class library. The medium-term plan then is to use Cling, which is based on CLang from llvm (http://root.cern.ch/drupal/content/cling), as the back-end. For this, there will be a PyCling on CPython, and all as stand-alone projects to remove the large dependency set. More interesting though, having Cling on the back of the bindings allows far more advanced features, such as dynamic setup of callbacks and cross-language inheritance to put Python code into C++ frameworks. This has been prototyped successfully in Cling's predecessor (CINT), but would be very hard to do in Reflex. Of course, an important reason for pushing this code out to the community somewhat early is that it allows anyone so interested to start hacking on it and help shape it! Best regards, Wim (*) http://morepypy.blogspot.com/2011/08/wrapping-c-libraries-with-reflection.html http://morepypy.blogspot.com/2010/07/cern-sprint-report-wrapping-c-libraries.html -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From arigo at tunes.org Mon Apr 23 08:19:43 2012 From: arigo at tunes.org (Armin Rigo) Date: Mon, 23 Apr 2012 08:19:43 +0200 Subject: [pypy-dev] output readable c In-Reply-To: <20372.38313.972033.184727@montanaro.dyndns.org> References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> <6AD411BBEB23447BAC9290729ED7959D@vSHliutaotao> <75689738F1C14BB28CD95C62627B358B@vSHliutaotao> <20372.38313.972033.184727@montanaro.dyndns.org> Message-ID: Hi, On Mon, Apr 23, 2012 at 01:35, wrote: > Maciej> I would really like this sort of change to come with its own > Maciej> tests (ones that check what was compiled, preferably). > > What might such tests look like? It is an issue: unit-testing this kind of "detail" is a bit hard, I agree. And the sheer size of the proposed new file is daunting: unless I'm wrong it is *bigger* than any other file in translator/c/. There is still something we could do. For a start, making sure that *all* tests pass everywhere, not just test_backendoptimized; including a complete pypy translation, and including the fact that the translated pypy seems to behave correctly (i.e. runs its own tests fine). Then someone needs to make additional tests that stress all branches in the new code --- additions in the same style as test_backendoptimized, but written specifically to test uncommon paths in your code. This is useful even if it only tests that the C compiler is happy with the generated code and the generated code behaves correctly. Bookaa, the person to do that can be you. In that case you need to learn about Mercurial version control and the http://bitbucket.org repository. I would recommend that you register on bitbucket, and create your own fork of "pypy/pypy" to play with. If you don't want to do that, I fear that your code will remain unaccepted, unless someone else jumps in and does it. (In all cases, in a fork, please.) We cannot just take such a large file into the official pypy and be happy. Fwiw we have a few pending bugs of the JIT optimizer's unroller, which is another big piece of code full of "ifs" with incomplete test coverage. I'm not harshly criticising, because in that case the functionality (=speed) is great for the end user; but in your situation you have to realize that it would be adding no functionality at all for the end user, i.e. the user of PyPy.
(I don't consider making it easier to read the C code to be an additional feature; if someone needs to read it, he is either busy chasing a really obscure bug of the JIT or the GC (like me, about once a year), or, more likely, he didn't properly test his source code.) A bientôt, Armin. From fijall at gmail.com Mon Apr 23 08:28:21 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 23 Apr 2012 08:28:21 +0200 Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: References: Message-ID: On Mon, Apr 23, 2012 at 7:08 AM, wrote: > Hi, > > as detailed in a couple of blog posts in the past (*), for some time now we > have been working on making C++ bindings through the Reflex package > available > on PyPy, in the form of the "cppyy" module. Software is never done, and > that > is true also in this case, but it has reached a level of maturity at which > it > can be said to be usable. Initial docs are now up on: > > http://doc.pypy.org/en/latest/cppyy.html > > with instructions on how to setup the reflex-support branch and how to test > it out. The documentation will be updated with more advanced and detailed > topics in the next couple of weeks or so. > > There's a sizable (non-PyPy) set of unit tests that still need to be worked > through, thus development will steadily continue at its current pace. > Still, > if you find that cppyy almost works for you except for a particular > feature, > feel free to ask for it to be prioritized. > > The biggest obstacle for most people (that are not in the field of HEP) > will > be the current set of dependencies. Although the dependency set for cppyy > is > really only Reflex, which could be distributed separately, for the CPython > equivalent code, the dependency set is a large portion of the ROOT class > library. The medium-term plan then is to use Cling, which is based on CLang > from llvm (http://root.cern.ch/drupal/content/cling) > as the back-end.
For > this, there will be a PyCling on CPython, and all as stand-alone projects > to > remove the large dependency set. > > More interesting though, having Cling on the back of the bindings allows > far more advanced features, such as dynamic setup of callbacks and cross > language inheritance to put Python code into C++ frameworks. This has been > prototyped successfully in Clings predecessor (CINT), but would be very > hard > to do in Reflex. > > Of course, an important reason for pushing this code out to the community > somewhat early, is that it allows anyone so interested to start hacking on > it and help shape it! > > Best regards, > Wim > > (*) http://morepypy.blogspot.com/2011/08/wrapping-c-libraries-with-reflection.html > http://morepypy.blogspot.com/2010/07/cern-sprint-report-wrapping-c-libraries.html > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Quick question - if it's mature, why not merge it to default? I presume it should be turned off, since there is a sizeable dependency, but still having it in default can be good. Cheers, fijal -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlavrijsen at lbl.gov Mon Apr 23 08:42:50 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Sun, 22 Apr 2012 23:42:50 -0700 (PDT) Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: References: Message-ID: Hi Maciej, > Quick question - if it's mature, why not merge it to default? I presume it > should be turned off, since there is a sizeable dependency, but still > having it in default can be good. the dependency is the main issue: for it to build, it requires headers and libs from Reflex. Completely off (i.e.
the module not picked up) could work: there are only a few changes that fall outside of the cppyy module proper (mainly access to the raw internals of arrays and a rule for .cxx in the generated Makefile). However, I'm not sure what the advantage would be, as the branch is regularly kept up to date with merges from default? Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From fijall at gmail.com Mon Apr 23 08:45:48 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 23 Apr 2012 08:45:48 +0200 Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: References: Message-ID: On Mon, Apr 23, 2012 at 8:42 AM, wrote: > Hi Maciej, > > > Quick question - if it's mature, why not merge it to default? I presume it >> should be turned off, since there is a sizeable dependency, but still >> having it in default can be good. >> > > the dependency is the main issue: for it to build, it requires headers and > libs from Reflex. Completely off (i.e. the module not picked up) could > work: > there are only a few changes that fall outside of the cppyy module proper > (mainly access to the raw internals of arrays and a rule for .cxx in the > generated Makefile). However, I'm not sure what the advantage would be, as > the branch is regularly kept up to date with merges from default? > > Best regards, > Wim > > -- > WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net > It'll at the very least run tests nightly so we'll make sure we don't break it (we can install reflex on tannit). It's easier to maintain and I prefer done features on trunk -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wlavrijsen at lbl.gov Mon Apr 23 08:50:56 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Sun, 22 Apr 2012 23:50:56 -0700 (PDT) Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: References: Message-ID: Hi Maciej, > It'll at the very least run tests nightly so we'll make sure we don't break > it (we can install reflex on tannit). that, to me, sounds great! Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From alexander.pyattaev at tut.fi Mon Apr 23 10:33:44 2012 From: alexander.pyattaev at tut.fi (Alexander Pyattaev) Date: Mon, 23 Apr 2012 11:33:44 +0300 Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: References: Message-ID: <1603656.jQ5RQKSfZF@hunter-laptop> An important question from userland - would it be beneficial to switch from SWIG to the new interface? When can that be done? Right now the main problem with SWIG is the object ownership, i.e. with pypy one has to set thisown=False for all objects to avoid crashes related to GC code. The problem occurs only for long-living objects though. Still, the point is that a new C++ interface is a cool thing to have, especially if it is native to pypy. Can't wait to test it out on a huge C++ extension. PS: I would have gladly tested an alpha/beta-quality version of the lib (I have some unit tests for the SWIG bindings, so they should work with the new lib too, I suppose), but I cannot build pypy from sources, not enough RAM =) Maybe someone could send me a build with the new feature for testing? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From amauryfa at gmail.com Mon Apr 23 10:52:04 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 23 Apr 2012 10:52:04 +0200 Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: <1603656.jQ5RQKSfZF@hunter-laptop> References: <1603656.jQ5RQKSfZF@hunter-laptop> Message-ID: 2012/4/23 Alexander Pyattaev > Right now the main problem with SWIG is the object ownership, i.e. with > pypy one has to set thisown=False for all objects to avoid crashes related > to GC code. The problem occurs only for long-living objects though. Do you have a sample code for this issue? -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From paniq at paniq.org Mon Apr 23 11:17:07 2012 From: paniq at paniq.org (Leonard Ritter) Date: Mon, 23 Apr 2012 11:17:07 +0200 Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: References: Message-ID: (sorry, that was also supposed to go to the ML) Awesome work, keep it up! But: I showed the reflex library you're using to friends and they both suggested clang to parse C++ headers instead. What do you think about that? Cheers, Leonard On Mon, Apr 23, 2012 at 7:08 AM, wrote: > Hi, > > as detailed in a couple of blog posts in the past (*), for some time now we > have been working on making C++ bindings through the Reflex package > available > on PyPy, in the form of the "cppyy" module. Software is never done, and that > is true also in this case, but it has reached a level of maturity at which > it > can be said to be usable. Initial docs are now up on: > > ? http://doc.pypy.org/en/latest/cppyy.html > > with instructions on how to setup the reflex-support branch and how to test > it out. The documentation will be updated with more advanced and detailed > topics in the next couple of weeks or so. > > There's a sizable (non-PyPy) set of unit tests that still need to be worked > through, thus development will steadily continue at its current pace. 
Still, > if you find that cppyy almost works for you except for a particular feature, > feel free to ask for it to be prioritized. > > The biggest obstacle for most people (that are not in the field of HEP) will > be the current set of dependencies. Although the dependency set for cppyy is > really only Reflex, which could be distributed separately, for the CPython > equivalent code, the dependency set is a large portion of the ROOT class > library. The medium-term plan then is to use Cling, which is based on CLang > from llvm (http://root.cern.ch/drupal/content/cling) as the back-end. For > this, there will be a PyCling on CPython, and all as stand-alone projects to > remove the large dependency set. > > More interesting though, having Cling on the back of the bindings allows > far more advanced features, such as dynamic setup of callbacks and cross > language inheritance to put Python code into C++ frameworks. This has been > prototyped successfully in Clings predecessor (CINT), but would be very hard > to do in Reflex. > > Of course, an important reason for pushing this code out to the community > somewhat early, is that it allows anyone so interested to start hacking on > it and help shape it! > > Best regards, > ? ? ? ? ? Wim > > (*) > http://morepypy.blogspot.com/2011/08/wrapping-c-libraries-with-reflection.html > > ?http://morepypy.blogspot.com/2010/07/cern-sprint-report-wrapping-c-libraries.html > -- > WLavrijsen at lbl.gov ? ?-- ? ?+1 (510) 486 6411 ? ?-- ? ?www.lavrijsen.net > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From paniq at paniq.org Mon Apr 23 11:21:36 2012 From: paniq at paniq.org (Leonard Ritter) Date: Mon, 23 Apr 2012 11:21:36 +0200 Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: References: Message-ID: Addendum: apparently, clang even provides python bindings. 
http://eli.thegreenplace.net/2011/07/03/parsing-c-in-python-with-clang/ On Mon, Apr 23, 2012 at 11:17 AM, Leonard Ritter wrote: > (sorry, that was also supposed to go to the ML) > > Awesome work, keep it up! > > But: I showed the reflex library you're using to friends and they both > suggested clang to parse C++ headers instead. What do you think about > that? > > Cheers, > Leonard > > On Mon, Apr 23, 2012 at 7:08 AM, ? wrote: >> Hi, >> >> as detailed in a couple of blog posts in the past (*), for some time now we >> have been working on making C++ bindings through the Reflex package >> available >> on PyPy, in the form of the "cppyy" module. Software is never done, and that >> is true also in this case, but it has reached a level of maturity at which >> it >> can be said to be usable. Initial docs are now up on: >> >> ? http://doc.pypy.org/en/latest/cppyy.html >> >> with instructions on how to setup the reflex-support branch and how to test >> it out. The documentation will be updated with more advanced and detailed >> topics in the next couple of weeks or so. >> >> There's a sizable (non-PyPy) set of unit tests that still need to be worked >> through, thus development will steadily continue at its current pace. Still, >> if you find that cppyy almost works for you except for a particular feature, >> feel free to ask for it to be prioritized. >> >> The biggest obstacle for most people (that are not in the field of HEP) will >> be the current set of dependencies. Although the dependency set for cppyy is >> really only Reflex, which could be distributed separately, for the CPython >> equivalent code, the dependency set is a large portion of the ROOT class >> library. The medium-term plan then is to use Cling, which is based on CLang >> from llvm (http://root.cern.ch/drupal/content/cling) as the back-end. For >> this, there will be a PyCling on CPython, and all as stand-alone projects to >> remove the large dependency set. 
>> >> More interesting though, having Cling on the back of the bindings allows >> far more advanced features, such as dynamic setup of callbacks and cross >> language inheritance to put Python code into C++ frameworks. This has been >> prototyped successfully in Cling's predecessor (CINT), but would be very hard >> to do in Reflex. >> >> Of course, an important reason for pushing this code out to the community >> somewhat early, is that it allows anyone so interested to start hacking on >> it and help shape it! >> >> Best regards, >> Wim >> >> (*) >> http://morepypy.blogspot.com/2011/08/wrapping-c-libraries-with-reflection.html >> >> http://morepypy.blogspot.com/2010/07/cern-sprint-report-wrapping-c-libraries.html >> -- >> WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev From alex.pyattaev at gmail.com Mon Apr 23 11:26:54 2012 From: alex.pyattaev at gmail.com (Alex Pyattaev) Date: Mon, 23 Apr 2012 12:26:54 +0300 Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: References: <1603656.jQ5RQKSfZF@hunter-laptop> Message-ID: <6784092.e6bzhdXAq3@hunter-laptop> Well, I have never been able to isolate it, and the project where it is triggered is rather closed-source. Long story short: make up a SWIG object inside a function, pass it as a pointer to the swig-wrapped function, then call gc.collect(). After that SWIG calls the object's destructor, and if the C code is still using it, you're screwed. I am not sure if the bug persists on a small scale; however, it only shows during long simulation runs where thousands of objects like that are circulated. If it ever happens on a smaller case I'll let you know. Alex On Monday, 23 April 2012 at 10:52:04, Amaury Forgeot d'Arc wrote: 2012/4/23 Alexander Pyattaev Right now the main problem with SWIG is the object ownership, i.e.
with pypy one has to set thisown=False for all objects to avoid crashes related to GC code. The problem occurs only for long-living objects though. Do you have a sample code for this issue? -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlavrijsen at lbl.gov Mon Apr 23 16:19:39 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Mon, 23 Apr 2012 07:19:39 -0700 (PDT) Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: References: Message-ID: Leonard, > Addendum: apparently, clang even provides python bindings. > http://eli.thegreenplace.net/2011/07/03/parsing-c-in-python-with-clang/ yes, but AFAIK it's C only and in any case, those are bindings to CLang, rather than bindings to the code parsed from CLang. You'll run into the same problems in their use from PyPy just as any other extension library, and you'd still have to build up the bindings to user code from those bindings to CLang (which is where the major work resides). > But: I showed the reflex library you're using to friends and they both > suggested clang to parse C++ headers instead. What do you think about > that? As said in the mail and the docs: the medium term is to use Cling, which is based on CLang. The difference is the C++ interactivity, which is a better match for Python to allow build-up of callbacks and cross-language inheritance. We started out with Reflex b/c that was a known quantity. Also, for large C++ libraries across many different projects (as we have to deal with), there is nothing in CLang that allows you to cleanly pre-package reflection info like is done with shared libraries in Reflex. For Cling, a method is being developed based on pre-compiled headers. 
Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From wlavrijsen at lbl.gov Mon Apr 23 16:52:49 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Mon, 23 Apr 2012 07:52:49 -0700 (PDT) Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: <1603656.jQ5RQKSfZF@hunter-laptop> References: <1603656.jQ5RQKSfZF@hunter-laptop> Message-ID: Hi Alexander, > An important question from userland - would it be beneficial to switch from > SWIG to the new interface? When can that be done? there are not many major differences between SWIG and this on the Python side (nothing that can't be captured in a compatibility layer). I'm thinking of such things as cvar, which is needed in SWIG but not in cppyy, and the use of templates, which need to be named in the .i's in SWIG but have a more "instantiable" syntax in cppyy (for when the Cling backend becomes available). Also, Reflex can parse more code than SWIG can, so you'd have to be careful in which direction you use things. > Right now the main problem with SWIG is the object ownership, i.e. with pypy > one has to set thisown=False for all objects to avoid crashes related to GC > code. The problem occurs only for long-living objects though. The GC and cppyy work fine for those objects clearly owned by PyPy. Of course, if the C++ side takes ownership, that would need to be patched up "by hand" or with a rule (if the C++ API is consistent). > PS: I would have gladly tested an alpha-beta quality version of the lib (I > have some unittests for SWIG bindings, so they should work with the new lib > also, I suppose), but I can not build pypy from sources, not enuf RAM =) Maybe > someone could send me a build with the new feature for testing? What distro (in particular, 32b or 64b, and which version of libssl)? I've never distributed a binary version of PyPy, but could give it a try.
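The ownership problem discussed in this exchange can be sketched in pure Python, without SWIG or cppyy. Everything here is invented for illustration: `Proxy` stands in for a SWIG-generated wrapper whose `__del__` plays the role of the C++ destructor (invoked when `thisown` is true), and `CppSide` stands in for framework code that keeps only a raw pointer and no Python reference.

```python
import gc

destructor_calls = []  # records which "C++ objects" got destroyed

class Proxy:
    """Stand-in for a SWIG proxy; thisown=True means the proxy owns the C++ object."""
    def __init__(self, name, thisown=True):
        self.name = name
        self.thisown = thisown

    def __del__(self):
        # A real SWIG proxy with thisown set would call the C++ destructor here.
        if self.thisown:
            destructor_calls.append(self.name)

class CppSide:
    """Stand-in for a C++ framework: it stores only a 'raw pointer' (the name)."""
    def __init__(self):
        self.raw_pointers = []

    def register(self, proxy):
        self.raw_pointers.append(proxy.name)  # no Python reference kept alive

cpp = CppSide()

def leaky(cpp):
    # Proxy created inside a function and handed to the "C++" side;
    # no Python reference survives the call.
    cpp.register(Proxy("unlucky"))

def safe(cpp):
    # The workaround from the thread: disown the proxy before handing it over.
    p = Proxy("lucky", thisown=False)
    cpp.register(p)

leaky(cpp)
safe(cpp)
gc.collect()  # on PyPy, destruction may be delayed until a collection

# "unlucky" was destroyed even though cpp still refers to it (a dangling
# pointer in a real binding); "lucky" survived because it was disowned.
assert destructor_calls == ["unlucky"]
assert cpp.raw_pointers == ["unlucky", "lucky"]
```

In real SWIG code the equivalent of `safe()` is setting `obj.thisown = False` on the proxy (or transferring ownership on the C++ side), which is exactly the workaround Alex describes.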
Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From anto.cuni at gmail.com Mon Apr 23 17:30:37 2012 From: anto.cuni at gmail.com (Antonio Cuni) Date: Mon, 23 Apr 2012 17:30:37 +0200 Subject: [pypy-dev] output readable c In-Reply-To: References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> <6AD411BBEB23447BAC9290729ED7959D@vSHliutaotao> <75689738F1C14BB28CD95C62627B358B@vSHliutaotao> <20372.38313.972033.184727@montanaro.dyndns.org> Message-ID: <4F95759D.8030208@gmail.com> Hi Bookaa, On 04/23/2012 08:19 AM, Armin Rigo wrote: > > Bookaa, the person to do that can be you. In that case you need to > learn about Mercurial version control and the http://bitbucket.org > repository. I would recommend that you register on bitbucket, and > create your own fork of "pypy/pypy" to play with. If you don't want > to do that, I fear that your code will remain unaccepted, unless > someone else jumps in and does it. (In all cases, in a fork, please.) just an additional suggestion: even if you do a fork of pypy, make sure to do your work inside a named branch (e.g. "better-c-sources" or something like that). This way it's much easier for us to pull your code into the main pypy repo and run all the tests using our buildbot. ciao, Anto From alex.pyattaev at gmail.com Mon Apr 23 20:42:01 2012 From: alex.pyattaev at gmail.com (Alex Pyattaev) Date: Mon, 23 Apr 2012 21:42:01 +0300 Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: References: <1603656.jQ5RQKSfZF@hunter-laptop> Message-ID: <3330162.lhSaeAqb8d@hunter-laptop> Distro is latest gentoo, amd64 with sse2 and sse3 (core i5). libssl is in 2 versions: dev-libs/openssl-0.9.8u:0.9.8 dev-libs/openssl-1.0.0h:0 so pick one that suits you better. Anyway, default binary pypy works fine. Currently i have only one typemap in my *.i file for swig, and it allows PyObject* pointers to go through to c++ side. Otherwise no special features are used, so migration should be pretty painless. 
And of course, if there are bugs, they would most likely surface in memory management, because the C++ code is executed millions of times during the program run. BR, Alex PS: sorry for using 2 emails - one is work, the other home; screwed company policy. On Monday, 23 April 2012 at 07:52:49, wlavrijsen at lbl.gov wrote: > Hi Alexander, > > > An important question from userland - would it be beneficial to switch > > from > > SWIG to the new interface? When can that be done? > > there are not many major differences between SWIG and this on the Python side > (nothing that can't be captured in a compatibility layer). I'm thinking of > such things as cvar, which is needed in SWIG but not in cppyy, and the use > of templates, which need to be named in the .i's in SWIG but have a more > "instantiable" syntax in cppyy (for when the Cling backend becomes > available). Also, Reflex can parse more code than SWIG can, so you'd have > to be careful in which direction you use things. > > > Right now the main problem with SWIG is the object ownership, i.e. with > > pypy one has to set thisown=False for all objects to avoid crashes > > related to GC code. The problem occurs only for long-living objects > > though. > > The GC and cppyy work fine for those objects clearly owned by PyPy. Of > course, if the C++ side takes ownership, that would need to be patched up > "by hand" or with a rule (if the C++ API is consistent). > > > PS: I would have gladly tested an alpha-beta quality version of the lib (I > > have some unittests for SWIG bindings, so they should work with the new lib > > also, I suppose), but I can not build pypy from sources, not enuf RAM =) > > Maybe someone could send me a build with the new feature for testing? > > What distro (in particular, 32b or 64b, and which version of libssl)? I've > never distributed a binary version of PyPy, but could give it a try. > > Best regards, > Wim -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wlavrijsen at lbl.gov Mon Apr 23 21:20:07 2012 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Mon, 23 Apr 2012 12:20:07 -0700 (PDT) Subject: [pypy-dev] cppyy: C++ bindings for PyPy In-Reply-To: <3330162.lhSaeAqb8d@hunter-laptop> References: <1603656.jQ5RQKSfZF@hunter-laptop> <3330162.lhSaeAqb8d@hunter-laptop> Message-ID: Hi Alex, > Distro is latest gentoo, amd64 with sse2 and sse3 (core i5). there's an older version of (Py)ROOT distributed with gentoo: http://packages.gentoo.org/package/sci-physics/root Do you want to use that package, or pick up a more recent version (doesn't matter for Reflex, I think, as it has been mostly stable)? The pypy binary below is against 5.32, as I had it available. > libssl is in 2 versions: > dev-libs/openssl-0.9.8u:0.9.8 > dev-libs/openssl-1.0.0h:0 > so pick one that suits you better. Okay; the machine that I had a binary ready for and is probably closest (I'm not sure whether I can find an amd64 box, but Intel binary should do), has 0.9.8e, so that should work. First attempt, build against ROOT 5.32 (latest stable version): http://cern.ch/wlav/pypy-reflex-support-042312.tar.bz2 with md5sum: c6ae683605658fa43e0089a99c82c49b pypy-reflex-support-042312.tar.bz2 > Anyway, default binary pypy works fine. Different machines, different people, etc. :) This will be a little trial and error, I'm afraid, since I haven't distributed PyPy in binary before (for CERN, I simply installed binaries on the global file system, which makes it available to most institutes, by building it on a typical worker node). Btw., if there'll be a number of iterations necessary, then we can take this off the list. > Currently i have only one typemap in my *.i file for swig, and it allows > PyObject* pointers to go through to c++ side. Passing PyObject* is in PyROOT, not yet in cppyy, as I wasn't sure about its representation (I'll probably have to pass it through cpyext first to build an actual PyObject* with the guaranteed expected layout). 
It's on the TODO list, though. > so migration should be pretty painless. Also depends on the specific C++ features used of course. Actually, what I hope to gain from this exercise is more that cppyy gives proper and clear diagnostics if features are missing, rather than just crash. Furthermore, any answers to questions you may have can be turned into docs, so that's useful as well. > And of course if there are bugs they would most likely arise, especially if > they are in memory management, because the c++ code is executed millions of > times during the program run. What we do in the unit tests, is to call the gc explicitly and then see whether cleanup was done as expected (by having an instance counter on the C++ side). Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From Ronny.Pfannschmidt at gmx.de Tue Apr 24 18:15:39 2012 From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt) Date: Tue, 24 Apr 2012 18:15:39 +0200 Subject: [pypy-dev] output readable c In-Reply-To: <4F95759D.8030208@gmail.com> References: <479D03ACB4BB4A97AAF506428EA8D8E8@vSHliutaotao> <6AD411BBEB23447BAC9290729ED7959D@vSHliutaotao> <75689738F1C14BB28CD95C62627B358B@vSHliutaotao> <20372.38313.972033.184727@montanaro.dyndns.org> <4F95759D.8030208@gmail.com> Message-ID: <4F96D1AB.4080506@gmx.de> somehow bookaa was removed from the recipient list, so a quick rehearsal with him added again Armin wrote: > Hi, > > On Mon, Apr 23, 2012 at 01:35, wrote: >> Maciej> I would really like this sort of changes to come with it's own >> Maciej> tests (ones that check what was compiled preferably). >> >> What might such tests look like? > It is an issue: unit-testing this kind of "detail" is a bit hard, I > agree. And the sheer size of the proposed new file is daunting: > unless I'm wrong it is *bigger* than any other file in translator/c/. > > There is still something we could do. 
For a start, making sure that > *all* tests pass everywhere, not just test_backendoptimized; including > a complete pypy translation, and including the fact that the > translated pypy seems to behave correctly (i.e. runs its own tests > fine). Then someone needs to make additional tests that stress all > branches in the new code --- additions in the same style as > test_backendoptimized, but written specifically to test uncommon paths > in your code. This is useful even if it only tests that the C > compiler is happy with the generated code and the generated code > behaves correctly. > > Bookaa, the person to do that can be you. In that case you need to > learn about Mercurial version control and the http://bitbucket.org > repository. I would recommend that you register on bitbucket, and > create your own fork of "pypy/pypy" to play with. If you don't want > to do that, I fear that your code will remain unaccepted, unless > someone else jumps in and does it. (In all cases, in a fork, please.) > > We cannot just take such a large file into the official pypy and be > happy. Fwiw we have a few pending bugs of the JIT optimizer's > unroller, which is another big piece of code full of "ifs" with > incomplete test coverage. I'm not harshly criticising, because in > that case the functionality (=speed) is great for the end user; but in > your situation you have to realize that it would be adding no > functionality at all for the end user, i.e. the user of PyPy. (I > don't consider making it easier to read the C code to be an additional > feature; if someone needs to read it, he is either busy chasing a > really obscure bug of the JIT or the GC (like me, about once a year), > or, more likely, he didn't properly test his source code.) > On 04/23/2012 05:30 PM, Antonio Cuni wrote: > Hi Bookaa, > > On 04/23/2012 08:19 AM, Armin Rigo wrote: >> >> Bookaa, the person to do that can be you. 
In that case you need to >> learn about Mercurial version control and the http://bitbucket.org >> repository. I would recommend that you register on bitbucket, and >> create your own fork of "pypy/pypy" to play with. If you don't want >> to do that, I fear that your code will remain unaccepted, unless >> someone else jumps in and does it. (In all cases, in a fork, please.) > > just an additional suggestion: even if you do a fork of pypy, make sure to do > your work inside a named branch (e.g. "better-c-sources" or something like > that). This way it's much easier for us to pull your code into the main pypy > repo and run all the tests using our buildbot. > > ciao, > Anto and as an additional note, i'm available on irc or gtalk as Ronny.Pfannschmidt at gmail.com to help with getting into hg and the workflows pypy uses for development/contribution -- Ronny From matti.picus at gmail.com Fri Apr 27 00:35:44 2012 From: matti.picus at gmail.com (Matti Picus) Date: Fri, 27 Apr 2012 01:35:44 +0300 Subject: [pypy-dev] windows and buildbots Message-ID: <4F99CDC0.5000509@gmail.com> An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Apr 27 17:45:53 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 27 Apr 2012 17:45:53 +0200 Subject: [pypy-dev] windows and buildbots In-Reply-To: <4F99CDC0.5000509@gmail.com> References: <4F99CDC0.5000509@gmail.com> Message-ID: On Fri, Apr 27, 2012 at 12:35 AM, Matti Picus wrote: > I created a branch win32-cleanup on the buildbot repository with two > changes: > > - shell 9 is failing to run since it has the wrong slashes in the command, > see for instance > > > http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/508/steps/shell_9/logs/stdio > > - I propose removing the snakepit32 and bigboard windows buildbots. > snakepit32 has not been seen for a while, and bigboard has not been updated > for the 2GB memory cutoff so attempts to translate fail, and I assume both > have old versions of libexpat. > > I'm not sure who should make the call on the second change, so I sent it > here. > > Matti > No idea, can you ask amaury? -------------- next part -------------- An HTML attachment was scrubbed... URL: From timonator at perpetuum-immobile.de Sun Apr 29 01:49:00 2012 From: timonator at perpetuum-immobile.de (Timo Paulssen) Date: Sun, 29 Apr 2012 01:49:00 +0200 Subject: [pypy-dev] custom classes in applevel tests Message-ID: <4F9C81EC.5080906@perpetuum-immobile.de> Hello there, I was writing a few test cases for numpypy to make sure it behaves like numpy does when confronted with objects that have either an __int__, an __index__ or both (rule of thumb: index is prefered, but int is accepted in absence of index). Now, I wrote six tests that use three custom objects (one set for getitem, one set for setitem and one test each for int only, int and index and index only).
I couldn't put those classes outside the applevel test (this is about pypy/module/micronumpy/test/test_numarray.py btw) because there's some kind of separation going on there. I *thought* the way to do it was to put it inside the applevel test class and have its name start with w_, but that wasn't it either. How do I make this work? Currently I have copies of the classes in each of the tests, but I'd like to clean this up. After this test cleanup, I think the branch numpypy-issue1137 is ready to be merged. Buildbot seems to agree.
- Timo From benjamin at python.org Sun Apr 29 02:32:25 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 28 Apr 2012 20:32:25 -0400 Subject: [pypy-dev] custom classes in applevel tests In-Reply-To: <4F9C8126.2080201@wakelift.de> References: <4F9C8126.2080201@wakelift.de> Message-ID: 2012/4/28 Timo Paulssen : > Hello there, > > I was writing a few test cases for numpypy to make sure it behaves like > numpy does when confronted with objects that have either an __int__, an > __index__ or both (rule of thumb: index is prefered, but int is accepted in > absence of index). > > Now, I wrote six tests that use three custom objects (one set for getitem, > one set for setitem and one test each for int only, int and index and index > only). > > I couldn't put those classes outside the applevel test (this is about > pypy/module/micronumpy/test/test_numarray.py btw) because there's some kind > of separation going on there. I *thought* the way to do it was to put it > inside the applevel test class and have its name start with w_, but that > wasn't it either. Yes, this only works for functions. You probably want to use space.appexec in setup_class(). -- Regards, Benjamin From pjenvey at underboss.org Mon Apr 30 23:55:50 2012 From: pjenvey at underboss.org (Philip Jenvey) Date: Mon, 30 Apr 2012 14:55:50 -0700 Subject: [pypy-dev] custom classes in applevel tests In-Reply-To: References: <4F9C8126.2080201@wakelift.de> Message-ID: <6514C94F-8277-4452-A61B-EAD2B330E003@underboss.org> On Apr 28, 2012, at 5:32 PM, Benjamin Peterson wrote: > 2012/4/28 Timo Paulssen : >> Hello there, >> >> I was writing a few test cases for numpypy to make sure it behaves like >> numpy does when confronted with objects that have either an __int__, an >> __index__ or both (rule of thumb: index is prefered, but int is accepted in >> absence of index). 
>> >> Now, I wrote six tests that use three custom objects (one set for getitem, >> one set for setitem and one test each for int only, int and index and index >> only). >> >> I couldn't put those classes outside the applevel test (this is about >> pypy/module/micronumpy/test/test_numarray.py btw) because there's some kind >> of separation going on there. I *thought* the way to do it was to put it >> inside the applevel test class and have its name start with w_, but that >> wasn't it either. > > Yes, this only works for functions. You probably want to use > space.appexec in setup_class(). You can just make a w_ function that returns the class (better than an appexec blob). E.g. see get_MockRawIO in test_textio: https://bitbucket.org/pypy/pypy/src/55deb3442c05/pypy/module/_io/test/test_textio.py#cl-213 -- Philip Jenvey
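The rule of thumb quoted at the top of this thread (__index__ preferred, __int__ accepted in its absence) describes numpy's behaviour; plain Python sequences are stricter and accept only __index__. A small self-contained sketch of the core-Python side, with made-up class names, shows why the tests need a custom class for each combination:

```python
# Illustrative classes, analogous to the custom objects the tests use.
class HasInt:
    def __int__(self):
        return 1

class HasIndex:
    def __index__(self):
        return 2

class HasBoth:
    def __int__(self):
        return 1
    def __index__(self):
        return 2

lst = list(range(5))

assert lst[HasIndex()] == 2   # __index__ is accepted for indexing
assert lst[HasBoth()] == 2    # __index__ wins when both are defined

try:
    lst[HasInt()]             # for plain lists, __int__ alone is rejected
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```

The numpy rule adds a fallback to __int__ in the last case, which is exactly the difference these numpypy tests pin down.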