From arigo at tunes.org Tue Jul 1 20:56:23 2014 From: arigo at tunes.org (Armin Rigo) Date: Tue, 1 Jul 2014 20:56:23 +0200 Subject: [pypy-dev] europython sprints? In-Reply-To: References: Message-ID: Hi, Latest news about the EuroPython sprint: you suddenly need to register that you're going to attend, and you need to do it *now*! The deadline is July 7th. See here: https://www.barcamptools.eu/europythonsprint2014?__l=en Don't be confused by the big title "EUROPYTHON": this is not the EuroPython web site. It's another organizer that is taking up this sprint, and you'll need to register on their web site as well before you can say that you'll attend the sprint. Thanks for Romain for pointing me to the recent EuroPython blog post about it. A bient?t, Armin. From mount.sarah at gmail.com Tue Jul 1 21:02:13 2014 From: mount.sarah at gmail.com (Sarah Mount) Date: Tue, 1 Jul 2014 20:02:13 +0100 Subject: [pypy-dev] europython sprints? In-Reply-To: References: Message-ID: Hi, On Tue, Jul 1, 2014 at 7:56 PM, Armin Rigo wrote: > Hi, > > Latest news about the EuroPython sprint: you suddenly need to register > that you're going to attend, and you need to do it *now*! The > deadline is July 7th. See here: > > https://www.barcamptools.eu/europythonsprint2014?__l=en > > Yup, but I think you guys also need to create a session proposal for a PyPy sprint... Cheers, Sarah -- Sarah Mount, Senior Lecturer, University of Wolverhampton website: http://www.snim2.org/ twitter: @snim2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Tue Jul 1 21:07:43 2014 From: arigo at tunes.org (Armin Rigo) Date: Tue, 1 Jul 2014 21:07:43 +0200 Subject: [pypy-dev] europython sprints? In-Reply-To: References: Message-ID: Hi Sarah, On 1 July 2014 21:02, Sarah Mount wrote: > Yup, but I think you guys also need to create a session proposal for a PyPy > sprint... Added it now. I don't see a way to connect people to selected session proposals, though, so we can't know who is going to attend which sprint... A bient?t, Armin. From mount.sarah at gmail.com Tue Jul 1 21:13:26 2014 From: mount.sarah at gmail.com (Sarah Mount) Date: Tue, 1 Jul 2014 20:13:26 +0100 Subject: [pypy-dev] europython sprints? In-Reply-To: References: Message-ID: On 1 Jul 2014 20:08, "Armin Rigo" wrote: > > Hi Sarah, > > On 1 July 2014 21:02, Sarah Mount wrote: > > Yup, but I think you guys also need to create a session proposal for a PyPy > > sprint... > > Added it now. I don't see a way to connect people to selected session > proposals, though, so we can't know who is going to attend which > sprint... > Well, I had assumed we could all add ourselves, but I see now we can only vote for proposals. How confusing! Regards, Sarah -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmueller at python-academy.de Wed Jul 2 00:20:00 2014 From: mmueller at python-academy.de (=?ISO-8859-1?Q?Mike_M=FCller?=) Date: Wed, 02 Jul 2014 00:20:00 +0200 Subject: [pypy-dev] europython sprints? In-Reply-To: References: Message-ID: <53B33410.4040801@python-academy.de> Am 01.07.14 21:13, schrieb Sarah Mount: > > On 1 Jul 2014 20:08, "Armin Rigo" > wrote: >> >> Hi Sarah, >> >> On 1 July 2014 21:02, Sarah Mount > wrote: >> > Yup, but I think you guys also need to create a session proposal for a PyPy >> > sprint... >> >> Added it now. I don't see a way to connect people to selected session >> proposals, though, so we can't know who is going to attend which >> sprint... 
>> > > Well, I had assumed we could all add ourselves, but I see now we can only vote > for proposals. How confusing! This is a barcamp tool we (ab)use for sprints too. The main purpose is to get a head count for room and catering planning. As far as I know, you can register as participant of the sprints as such but not for a particular sprint. Voting for a proposal can serve a similar purpose though. BTW, the sprints are run by the EuroPython team but use a different website just for practical reason. Regards, Mike From yyc1992 at gmail.com Fri Jul 4 02:46:11 2014 From: yyc1992 at gmail.com (Yichao Yu) Date: Fri, 4 Jul 2014 08:46:11 +0800 Subject: [pypy-dev] How to get transparent proxy working. Message-ID: Hi, I found the transparent proxy in pypy recently. However, the example from the document doesn't seem to work here[1]. Many other builtin types (including tuple, dict, int) and the types that inherit from them does not work either. `object`, however, can be wrapped but doesn't seem to behave correctly either. (add doesn't work) I am using the latest version (as of a few hours ago at least) from bitbucket (28a1ebabc3e4) and I have recompiled it with --objspace-std-withtproxy just to make sure it is enabled. Is it broken or am I missing anything? Yichao Yu [1] http://nbviewer.ipython.org/gist/yuyichao/6e4ca22b9aa52bd02661 From arigo at tunes.org Fri Jul 4 13:23:42 2014 From: arigo at tunes.org (Armin Rigo) Date: Fri, 4 Jul 2014 13:23:42 +0200 Subject: [pypy-dev] How to get transparent proxy working. In-Reply-To: References: Message-ID: Hi Yichao, On 4 July 2014 02:46, Yichao Yu wrote: > I found the transparent proxy in pypy recently. As you have noticed, this is an old feature not supported any more since a long time (and mostly untested). It was an experimental feature that turned out not to be useful in practice. Just using regular Python, you can mostly achieve the same effects, with the exception (mostly) of pretending to have an object of some built-in type for built-in function calls. I'm not sure what kind of objects can be wrapped right now... User-defined instances work, but as you found out, most special methods on them don't work. If you want to fix it, you're welcome. A bient?t, Armin. From yyc1992 at gmail.com Fri Jul 4 13:44:15 2014 From: yyc1992 at gmail.com (Yichao Yu) Date: Fri, 4 Jul 2014 19:44:15 +0800 Subject: [pypy-dev] How to get transparent proxy working. In-Reply-To: References: Message-ID: On Fri, Jul 4, 2014 at 7:23 PM, Armin Rigo wrote: > Hi Yichao, > > On 4 July 2014 02:46, Yichao Yu wrote: >> I found the transparent proxy in pypy recently. > > As you have noticed, this is an old feature not supported any more > since a long time (and mostly untested). It was an experimental > feature that turned out not to be useful in practice. Just using > regular Python, you can mostly achieve the same effects, with the > exception (mostly) of pretending to have an object of some built-in > type for built-in function calls. > > I'm not sure what kind of objects can be wrapped right now... > User-defined instances work, but as you found out, most special > methods on them don't work. If you want to fix it, you're welcome. IC.... Well, I am looking at this mainly because I thought it might be possible to use tproxy as a hack to fix this[1] numpypy issue (make a type with __getitem__ not iterable). (Or at least see what is the preferred way of interacting with object space.) If it is not working anymore, should it be at least reflected on the document? 
(given that even the example doesn't work anymore....) [1] https://bitbucket.org/pypy/numpy/issue/10/scalar-types-should-not-be-iterable#comment-10882437 Yichao Yu > > > A bient?t, > > Armin. From arigo at tunes.org Fri Jul 4 14:26:03 2014 From: arigo at tunes.org (Armin Rigo) Date: Fri, 4 Jul 2014 14:26:03 +0200 Subject: [pypy-dev] How to get transparent proxy working. In-Reply-To: References: Message-ID: Hi Yichao, On 4 July 2014 13:44, Yichao Yu wrote: > If it is not working anymore, should it be at least reflected on the > document? (given that even the example doesn't work anymore....) Added a warning. Armin From anto.cuni at gmail.com Fri Jul 4 17:02:47 2014 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 4 Jul 2014 17:02:47 +0200 Subject: [pypy-dev] How to get transparent proxy working. In-Reply-To: References: Message-ID: Hi Armin, > As you have noticed, this is an old feature not supported any more > since a long time (and mostly untested). It was an experimental > feature that turned out not to be useful in practice. Just using > regular Python, you can mostly achieve the same effects, with the > exception (mostly) of pretending to have an object of some built-in > type for built-in function calls. > actually, ?jinja2? uses transparent proxy to create fictiious traceback chains: https://github.com/mitsuhiko/jinja2/blob/master/jinja2/debug.py ?if we decided that transparent proxies are not supported or deprecated, we should remove them and tell people to stop using them.? -------------- next part -------------- An HTML attachment was scrubbed... URL: From benzolius at yahoo.com Fri Jul 4 17:01:40 2014 From: benzolius at yahoo.com (Benedek Zoltan) Date: Fri, 4 Jul 2014 08:01:40 -0700 Subject: [pypy-dev] compiling with 4G ram Message-ID: <1404486100.95932.YahooMailNeo@web164706.mail.gq1.yahoo.com> Hi, Recently I tried to compile Pypy on a Linux x86_64 machine with only 4G RAM, according to the instructions: http://pypy.org/download.html I tried by CPython2.6 by the command: PYPY_GC_MAX_DELTA=200MB python2.6 --jit loop_longevity=300 ../../rpython/bin/rpython -Ojit targetpypystandalone I had to remove the --jit option and the compiling process uses all the ram. Is there a way to reduce the used memory amount, or I have no chance to try Pypy on this machine? Thanks Zoltan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Jul 4 18:36:47 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 4 Jul 2014 18:36:47 +0200 Subject: [pypy-dev] How to get transparent proxy working. In-Reply-To: References: Message-ID: maybe a fakeable traceback is a good idea nonetheless? (we can provide it directly from some __pypy__ module instead of a general mechanism) On Fri, Jul 4, 2014 at 5:02 PM, Antonio Cuni wrote: > Hi Armin, > > >> >> As you have noticed, this is an old feature not supported any more >> since a long time (and mostly untested). It was an experimental >> feature that turned out not to be useful in practice. Just using >> regular Python, you can mostly achieve the same effects, with the >> exception (mostly) of pretending to have an object of some built-in >> type for built-in function calls. > > > actually, jinja2 uses transparent proxy to create fictiious traceback > chains: > https://github.com/mitsuhiko/jinja2/blob/master/jinja2/debug.py > > if we decided that transparent proxies are not supported or deprecated, we > should remove them and tell people to stop using them. 
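A rough plain-Python substitute for what Armin describes above (no tproxy involved) is simple attribute delegation; the Proxy class below is a made-up sketch, and as he notes it cannot cover the remaining gap: special methods and type checks are looked up on the class, so built-in functions are not fooled.

    class Proxy(object):
        # forward attribute access to a wrapped object, plain Python only
        def __init__(self, obj):
            object.__setattr__(self, '_obj', obj)
        def __getattr__(self, name):
            return getattr(object.__getattribute__(self, '_obj'), name)
        def __setattr__(self, name, value):
            setattr(object.__getattribute__(self, '_obj'), name, value)
        def __repr__(self):
            return repr(object.__getattribute__(self, '_obj'))

    p = Proxy([])
    p.append(1)       # works: ordinary attribute lookup is forwarded
    # len(p) and isinstance(p, list) still fail -- exactly the part that
    # needed the tproxy machinery to work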
> > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From arigo at tunes.org Fri Jul 4 22:42:39 2014 From: arigo at tunes.org (Armin Rigo) Date: Fri, 4 Jul 2014 22:42:39 +0200 Subject: [pypy-dev] compiling with 4G ram In-Reply-To: <1404486100.95932.YahooMailNeo@web164706.mail.gq1.yahoo.com> References: <1404486100.95932.YahooMailNeo@web164706.mail.gq1.yahoo.com> Message-ID: Hi Benedek, On 4 July 2014 17:01, Benedek Zoltan wrote: > PYPY_GC_MAX_DELTA=200MB python2.6 --jit loop_longevity=300 > ../../rpython/bin/rpython -Ojit targetpypystandalone The "PYPY_GC_MAX_DELTA" and the "--jit" options only make sense for PyPy, not for CPython. These options are described here if you use PyPy itself to do the translation. If you have exactly 4 GB of RAM, and you don't have any PyPy to start with, then CPython is probably using just too much memory indeed. A bient?t, Armin. From yyc1992 at gmail.com Sat Jul 5 11:34:16 2014 From: yyc1992 at gmail.com (Yichao Yu) Date: Sat, 5 Jul 2014 17:34:16 +0800 Subject: [pypy-dev] Unable to build pypy with '--shared' option Message-ID: Hi, I have got the following error (see below, with a lot of other compiler warnings) when trying to compile pypy with the '--shared' option during the last stop (compile_c). I've seen similar issues without --shared before and it was solved by setting the CFLAGS to -O1 but that doesn't seem to help this time. Is it possible to fix this? Yichao Yu Compiler version: gcc 4.9.0 (gcc-multilibs from ArchLinux official repo). [translation:ERROR] Traceback (most recent call last): [translation:ERROR] File "app_main.py", line 75, in run_toplevel [translation:ERROR] File "/home/yuyichao/projects/mlinux/pkg/all/pypy-hg/src/pypy/rpython/translator/c/gcc/trackgcroot.py", line 2086, in [translation:ERROR] tracker.process(f, g, filename=fn) [translation:ERROR] File "/home/yuyichao/projects/mlinux/pkg/all/pypy-hg/src/pypy/rpython/translator/c/gcc/trackgcroot.py", line 1979, in process [translation:ERROR] tracker = parser.process_function(lines, filename) [translation:ERROR] File "/home/yuyichao/projects/mlinux/pkg/all/pypy-hg/src/pypy/rpython/translator/c/gcc/trackgcroot.py", line 1494, in process_function [translation:ERROR] table = tracker.computegcmaptable(self.verbose) [translation:ERROR] File "/home/yuyichao/projects/mlinux/pkg/all/pypy-hg/src/pypy/rpython/translator/c/gcc/trackgcroot.py", line 53, in computegcmaptable [translation:ERROR] self.parse_instructions() [translation:ERROR] File "/home/yuyichao/projects/mlinux/pkg/all/pypy-hg/src/pypy/rpython/translator/c/gcc/trackgcroot.py", line 215, in parse_instructions [translation:ERROR] self.find_missing_visit_method(opname) [translation:ERROR] File "/home/yuyichao/projects/mlinux/pkg/all/pypy-hg/src/pypy/rpython/translator/c/gcc/trackgcroot.py", line 245, in find_missing_visit_method [translation:ERROR] raise UnrecognizedOperation(opname) [translation:ERROR] UnrecognizedOperation: rex64 [translation:ERROR] make: *** [implement_1.gcmap] Error 1 [translation:ERROR] """) From arigo at tunes.org Sat Jul 5 11:40:08 2014 From: arigo at tunes.org (Armin Rigo) Date: Sat, 5 Jul 2014 11:40:08 +0200 Subject: [pypy-dev] Unable to build pypy with '--shared' option In-Reply-To: References: Message-ID: Hi Yichao, On 5 July 2014 11:34, Yichao Yu wrote: > I have got the following error (see below, with a lot of other > compiler warnings) when trying to compile pypy with the '--shared' > option during the last 
stop (compile_c). Ah, that's yet another asmgcc issue. Probably shows up with gcc 4.9. Can you send me privately (or on a site like http://bpaste.net/ ) the .s file given as argument to the call to trackgcroot.py? (It's the one that includes "rex64" in it.) A bient?t, Armin. From yyc1992 at gmail.com Sat Jul 5 11:55:24 2014 From: yyc1992 at gmail.com (Yichao Yu) Date: Sat, 5 Jul 2014 17:55:24 +0800 Subject: [pypy-dev] Unable to build pypy with '--shared' option In-Reply-To: References: Message-ID: Hi Armin, On Sat, Jul 5, 2014 at 5:40 PM, Armin Rigo wrote: > Hi Yichao, > > On 5 July 2014 11:34, Yichao Yu wrote: >> I have got the following error (see below, with a lot of other >> compiler warnings) when trying to compile pypy with the '--shared' >> option during the last stop (compile_c). > > Ah, that's yet another asmgcc issue. Probably shows up with gcc 4.9. > Can you send me privately (or on a site like http://bpaste.net/ ) the > .s file given as argument to the call to trackgcroot.py? (It's the > one that includes "rex64" in it.) I grep'ed the /tmp/usession-default-*/ directory and find 3 .s files with rex64 in it. Here is one of them[1], with the corresponding c file[2]. Hope these can help. =) Yichao Yu [1] http://bpaste.net/show/437121/ [2] http://bpaste.net/show/437127/ > > > A bient?t, > > Armin. From yyc1992 at gmail.com Sat Jul 5 13:12:10 2014 From: yyc1992 at gmail.com (Yichao Yu) Date: Sat, 5 Jul 2014 19:12:10 +0800 Subject: [pypy-dev] Unable to build pypy with '--shared' option In-Reply-To: References: Message-ID: Actually the problem is gone when adding option --gcrootfinder=shadowstack, which also seems to be necessary when building the py3k branch with --shared. I guess this is what you mean by "yet another asmgcc issue"? On Sat, Jul 5, 2014 at 5:55 PM, Yichao Yu wrote: > Hi Armin, > > On Sat, Jul 5, 2014 at 5:40 PM, Armin Rigo wrote: >> Hi Yichao, >> >> On 5 July 2014 11:34, Yichao Yu wrote: >>> I have got the following error (see below, with a lot of other >>> compiler warnings) when trying to compile pypy with the '--shared' >>> option during the last stop (compile_c). >> >> Ah, that's yet another asmgcc issue. Probably shows up with gcc 4.9. >> Can you send me privately (or on a site like http://bpaste.net/ ) the >> .s file given as argument to the call to trackgcroot.py? (It's the >> one that includes "rex64" in it.) > > I grep'ed the /tmp/usession-default-*/ directory and find 3 .s files > with rex64 in it. Here is one of them[1], with the corresponding c > file[2]. Hope these can help. =) > > Yichao Yu > > [1] http://bpaste.net/show/437121/ > [2] http://bpaste.net/show/437127/ > >> >> >> A bient?t, >> >> Armin. From anto.cuni at gmail.com Sat Jul 5 15:16:00 2014 From: anto.cuni at gmail.com (Antonio Cuni) Date: Sat, 5 Jul 2014 15:16:00 +0200 Subject: [pypy-dev] compiling with 4G ram In-Reply-To: References: <1404486100.95932.YahooMailNeo@web164706.mail.gq1.yahoo.com> Message-ID: Hello Benedek, On Fri, Jul 4, 2014 at 10:42 PM, Armin Rigo wrote: > If you have exactly 4 GB of RAM, > and you don't have any PyPy to start with, then CPython is probably > using just too much memory indeed. > ??note that what Armin says applies only if you want to *translate* pypy by yourself. 
If you just want to try it, you can simply download a prebuilt binary from pypy.org or from this site, which offers portable binaries which are supposed to run on any linux distro: https://github.com/squeaky-pl/portable-pypy ciao, Anto -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat Jul 5 17:33:13 2014 From: arigo at tunes.org (Armin Rigo) Date: Sat, 5 Jul 2014 17:33:13 +0200 Subject: [pypy-dev] Unable to build pypy with '--shared' option In-Reply-To: References: Message-ID: Hi Yichao, On 5 July 2014 13:12, Yichao Yu wrote: > Actually the problem is gone when adding option > --gcrootfinder=shadowstack, which also seems to be necessary when > building the py3k branch with --shared. > > I guess this is what you mean by "yet another asmgcc issue"? Yes. Here, as is often the case, the matter is easily resolved. I checked in 85672cabac67, which should fix it. A bient?t, Armin. From 4u5vjqpb at gmail.com Tue Jul 8 19:27:24 2014 From: 4u5vjqpb at gmail.com (John Smith) Date: Tue, 8 Jul 2014 13:27:24 -0400 Subject: [pypy-dev] Getting involved Message-ID: As a less objectionable alternative, what about a package which when passed code (string/code object/not sure) returns a newly compiled object with tail call optimization? J -------------- next part -------------- An HTML attachment was scrubbed... URL: From 4u5vjqpb at gmail.com Tue Jul 8 19:31:35 2014 From: 4u5vjqpb at gmail.com (John Smith) Date: Tue, 8 Jul 2014 13:31:35 -0400 Subject: [pypy-dev] Getting involved In-Reply-To: References: Message-ID: FYI, that was supposed to be in response to: https://mail.python.org/pipermail/pypy-dev/2014-June/012546.html On Tue, Jul 8, 2014 at 1:27 PM, John Smith <4u5vjqpb at gmail.com> wrote: > > As a less objectionable alternative, what about a package which when > passed code (string/code object/not sure) returns a newly compiled object > with tail call optimization? > > J From fijall at gmail.com Tue Jul 8 20:35:49 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 8 Jul 2014 20:35:49 +0200 Subject: [pypy-dev] Getting involved In-Reply-To: References: Message-ID: On Tue, Jul 8, 2014 at 7:27 PM, John Smith <4u5vjqpb at gmail.com> wrote: > As a less objectionable alternative, what about a package which when passed > code (string/code object/not sure) returns a newly compiled object with tail > call optimization? you can do that in pure python (it does not have to be a part of pypy in any sense) > > J > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From 4u5vjqpb at gmail.com Tue Jul 8 22:57:40 2014 From: 4u5vjqpb at gmail.com (John Smith) Date: Tue, 8 Jul 2014 16:57:40 -0400 Subject: [pypy-dev] Getting involved In-Reply-To: References: Message-ID: Yes. But tail-call optimization is a performance optimization, so it goes well with PyPy. I wanted to suggest to Travis that he not be discouraged from his idea and give him another idea for getting tail-call more widely used compared to making his own fork of PyPy. On Tue, Jul 8, 2014 at 2:35 PM, Maciej Fijalkowski wrote: > On Tue, Jul 8, 2014 at 7:27 PM, John Smith <4u5vjqpb at gmail.com> wrote: > > As a less objectionable alternative, what about a package which when > passed > > code (string/code object/not sure) returns a newly compiled object with > tail > > call optimization? 
> > you can do that in pure python (it does not have to be a part of pypy > in any sense) > > > > > J > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed Jul 9 09:17:02 2014 From: arigo at tunes.org (Armin Rigo) Date: Wed, 9 Jul 2014 09:17:02 +0200 Subject: [pypy-dev] Getting involved In-Reply-To: References: Message-ID: Hi John, On 8 July 2014 22:57, John Smith <4u5vjqpb at gmail.com> wrote: > Yes. But tail-call optimization is a performance optimization, so it goes > well with PyPy. I wanted to suggest to Travis that he not be discouraged > from his idea and give him another idea for getting tail-call more widely > used compared to making his own fork of PyPy. Yes, that sounds like an idea --- which I wouldn't call a *good* idea because I still think that tail call optimization is a bad idea in Python, but that's a personal issue. If you want to implement such a pure Python compiler modification, you can; but it's a bit unclear what kind of alternate Python code or bytecode to emit. We could consider adding an experimental bytecode to the standard PyPy, even if it is normally never used, as long as it's extensively tested. Designing the bytecode correctly is an open question. A bient?t, Armin. From rymg19 at gmail.com Wed Jul 9 21:21:10 2014 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Wed, 9 Jul 2014 14:21:10 -0500 Subject: [pypy-dev] Getting involved In-Reply-To: References: Message-ID: Well, if the stack still shows the function call, it isn't a bad idea. On Wed, Jul 9, 2014 at 2:17 AM, Armin Rigo wrote: > Hi John, > > On 8 July 2014 22:57, John Smith <4u5vjqpb at gmail.com> wrote: > > Yes. But tail-call optimization is a performance optimization, so it goes > > well with PyPy. I wanted to suggest to Travis that he not be discouraged > > from his idea and give him another idea for getting tail-call more widely > > used compared to making his own fork of PyPy. > > Yes, that sounds like an idea --- which I wouldn't call a *good* idea > because I still think that tail call optimization is a bad idea in > Python, but that's a personal issue. If you want to implement such a > pure Python compiler modification, you can; but it's a bit unclear > what kind of alternate Python code or bytecode to emit. We could > consider adding an experimental bytecode to the standard PyPy, even if > it is normally never used, as long as it's extensively tested. > Designing the bytecode correctly is an open question. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -- Ryan If anybody ever asks me why I prefer C++ to C, my answer will be simple: "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was nul-terminated." -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurie at tratt.net Thu Jul 10 13:12:24 2014 From: laurie at tratt.net (Laurence Tratt) Date: Thu, 10 Jul 2014 12:12:24 +0100 Subject: [pypy-dev] Getting involved In-Reply-To: References: Message-ID: <20140710111224.GB22176@phase.tratt.net> On Wed, Jul 09, 2014 at 02:21:10PM -0500, Ryan Gonzalez wrote: > Well, if the stack still shows the function call, it isn't a bad idea. 
Tail call optimisation has trade-offs: one can always construct cases where it loses debugging information. If you'll permit the immodesty, one of my old blog articles gives one example of this [1]. Laurie [1] http://tratt.net/laurie/blog/entries/tail_call_optimization From 4u5vjqpb at gmail.com Thu Jul 10 15:39:25 2014 From: 4u5vjqpb at gmail.com (John Smith) Date: Thu, 10 Jul 2014 09:39:25 -0400 Subject: [pypy-dev] Getting involved In-Reply-To: References: Message-ID: Hi Armin, I don't see why a new bytecode would be needed. I'm not sure exactly how Python functions are called, but this is based off what I know of X86 assembly and from what I gathered from Laurie's blog. Why not push the new arguments onto the stack and then jump back to the beginning of the function so as to reuse its stack frame? Which could all be done with the current bytecodes. >From a bigger picture perspective this and other compiler optimizations which change the traceback may be a good fit for use under the "-O" flag. Default behavior will be unchanged, and they can be added when wanted. Best, J Hi John, On 8 July 2014 22:57, John Smith <4u5vjqpb at gmail.com> wrote: > Yes. But tail-call optimization is a performance optimization, so it goes > well with PyPy. I wanted to suggest to Travis that he not be discouraged > from his idea and give him another idea for getting tail-call more widely > used compared to making his own fork of PyPy. Yes, that sounds like an idea --- which I wouldn't call a *good* idea because I still think that tail call optimization is a bad idea in Python, but that's a personal issue. If you want to implement such a pure Python compiler modification, you can; but it's a bit unclear what kind of alternate Python code or bytecode to emit. We could consider adding an experimental bytecode to the standard PyPy, even if it is normally never used, as long as it's extensively tested. Designing the bytecode correctly is an open question. A bient?t, Armin. From estama at gmail.com Thu Jul 10 15:52:11 2014 From: estama at gmail.com (Eleytherios Stamatogiannakis) Date: Thu, 10 Jul 2014 16:52:11 +0300 Subject: [pypy-dev] Getting involved In-Reply-To: References: Message-ID: <53BE9A8B.8070105@gmail.com> Hi all, and sorry for interjecting. Concerning the concept of leaving a function's stack in place. Can the same idea be used for generators? So when we yield (return) from a generator, we don't erase its stack but leave it in place to be reused for the next call to it. l. On 10/07/14 16:39, John Smith wrote: > Hi Armin, > > I don't see why a new bytecode would be needed. I'm not sure exactly > how Python functions are called, but this is based off what I know of > X86 assembly and from what I gathered from Laurie's blog. Why not push > the new arguments onto the stack and then jump back to the beginning > of the function so as to reuse its stack frame? Which could all be > done with the current bytecodes. > > From a bigger picture perspective this and other compiler > optimizations which change the traceback may be a good fit for use > under the "-O" flag. Default behavior will be unchanged, and they can > be added when wanted. > > Best, > J > > > Hi John, > > On 8 July 2014 22:57, John Smith <4u5vjqpb at gmail.com> wrote: >> Yes. But tail-call optimization is a performance optimization, so it goes >> well with PyPy. I wanted to suggest to Travis that he not be discouraged >> from his idea and give him another idea for getting tail-call more widely >> used compared to making his own fork of PyPy. 
> > Yes, that sounds like an idea --- which I wouldn't call a *good* idea > because I still think that tail call optimization is a bad idea in > Python, but that's a personal issue. If you want to implement such a > pure Python compiler modification, you can; but it's a bit unclear > what kind of alternate Python code or bytecode to emit. We could > consider adding an experimental bytecode to the standard PyPy, even if > it is normally never used, as long as it's extensively tested. > Designing the bytecode correctly is an open question. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From 4u5vjqpb at gmail.com Thu Jul 10 16:12:30 2014 From: 4u5vjqpb at gmail.com (John Smith) Date: Thu, 10 Jul 2014 10:12:30 -0400 Subject: [pypy-dev] Getting involved In-Reply-To: <53BE9A8B.8070105@gmail.com> References: <53BE9A8B.8070105@gmail.com> Message-ID: Hi Eleytherios, It looks to me like this is already done. Using the `dis` module and a simple generator you can see that POP_BLOCK is only called when the loop ends (which it never does in this example) but it remains after each value is yeilded (YIELD_VALUE). def f(): while True: yield 3 dis(f) 2 0 SETUP_LOOP 15 (to 18) >> 3 LOAD_GLOBAL 0 (True) 6 POP_JUMP_IF_FALSE 17 3 9 LOAD_CONST 1 (3) 12 YIELD_VALUE 13 POP_TOP 14 JUMP_ABSOLUTE 3 >> 17 POP_BLOCK >> 18 LOAD_CONST 0 (None) 21 RETURN_VALUE Best, J On Thu, Jul 10, 2014 at 9:52 AM, Eleytherios Stamatogiannakis wrote: > Hi all, and sorry for interjecting. > > Concerning the concept of leaving a function's stack in place. > > Can the same idea be used for generators? So when we yield (return) from a > generator, we don't erase its stack but leave it in place to be reused for > the next call to it. > > l. > > > On 10/07/14 16:39, John Smith wrote: >> >> Hi Armin, >> >> I don't see why a new bytecode would be needed. I'm not sure exactly >> how Python functions are called, but this is based off what I know of >> X86 assembly and from what I gathered from Laurie's blog. Why not push >> the new arguments onto the stack and then jump back to the beginning >> of the function so as to reuse its stack frame? Which could all be >> done with the current bytecodes. >> >> From a bigger picture perspective this and other compiler >> optimizations which change the traceback may be a good fit for use >> under the "-O" flag. Default behavior will be unchanged, and they can >> be added when wanted. >> >> Best, >> J >> >> >> Hi John, >> >> On 8 July 2014 22:57, John Smith <4u5vjqpb at gmail.com> wrote: >>> >>> Yes. But tail-call optimization is a performance optimization, so it goes >>> well with PyPy. I wanted to suggest to Travis that he not be discouraged >>> from his idea and give him another idea for getting tail-call more widely >>> used compared to making his own fork of PyPy. >> >> >> Yes, that sounds like an idea --- which I wouldn't call a *good* idea >> because I still think that tail call optimization is a bad idea in >> Python, but that's a personal issue. If you want to implement such a >> pure Python compiler modification, you can; but it's a bit unclear >> what kind of alternate Python code or bytecode to emit. We could >> consider adding an experimental bytecode to the standard PyPy, even if >> it is normally never used, as long as it's extensively tested. >> Designing the bytecode correctly is an open question. >> >> >> A bient?t, >> >> Armin. 
>> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From estama at gmail.com Thu Jul 10 19:22:43 2014 From: estama at gmail.com (Eleytherios Stamatogiannakis) Date: Thu, 10 Jul 2014 20:22:43 +0300 Subject: [pypy-dev] Getting involved In-Reply-To: References: <53BE9A8B.8070105@gmail.com> Message-ID: <53BECBE3.7040300@gmail.com> Wow i didn't knew that. Thanks for clarifying it :) . l. On 10/07/14 17:12, John Smith wrote: > Hi Eleytherios, > > It looks to me like this is already done. Using the `dis` module and a > simple generator you can see that POP_BLOCK is only called when the > loop ends (which it never does in this example) but it remains after > each value is yeilded (YIELD_VALUE). > > def f(): > while True: > yield 3 > > dis(f) > 2 0 SETUP_LOOP 15 (to 18) > >> 3 LOAD_GLOBAL 0 (True) > 6 POP_JUMP_IF_FALSE 17 > > 3 9 LOAD_CONST 1 (3) > 12 YIELD_VALUE > 13 POP_TOP > 14 JUMP_ABSOLUTE 3 > >> 17 POP_BLOCK > >> 18 LOAD_CONST 0 (None) > 21 RETURN_VALUE > > > Best, > J > > On Thu, Jul 10, 2014 at 9:52 AM, Eleytherios Stamatogiannakis > wrote: >> Hi all, and sorry for interjecting. >> >> Concerning the concept of leaving a function's stack in place. >> >> Can the same idea be used for generators? So when we yield (return) from a >> generator, we don't erase its stack but leave it in place to be reused for >> the next call to it. >> >> l. >> >> >> On 10/07/14 16:39, John Smith wrote: >>> >>> Hi Armin, >>> >>> I don't see why a new bytecode would be needed. I'm not sure exactly >>> how Python functions are called, but this is based off what I know of >>> X86 assembly and from what I gathered from Laurie's blog. Why not push >>> the new arguments onto the stack and then jump back to the beginning >>> of the function so as to reuse its stack frame? Which could all be >>> done with the current bytecodes. >>> >>> From a bigger picture perspective this and other compiler >>> optimizations which change the traceback may be a good fit for use >>> under the "-O" flag. Default behavior will be unchanged, and they can >>> be added when wanted. >>> >>> Best, >>> J >>> >>> >>> Hi John, >>> >>> On 8 July 2014 22:57, John Smith <4u5vjqpb at gmail.com> wrote: >>>> >>>> Yes. But tail-call optimization is a performance optimization, so it goes >>>> well with PyPy. I wanted to suggest to Travis that he not be discouraged >>>> from his idea and give him another idea for getting tail-call more widely >>>> used compared to making his own fork of PyPy. >>> >>> >>> Yes, that sounds like an idea --- which I wouldn't call a *good* idea >>> because I still think that tail call optimization is a bad idea in >>> Python, but that's a personal issue. If you want to implement such a >>> pure Python compiler modification, you can; but it's a bit unclear >>> what kind of alternate Python code or bytecode to emit. We could >>> consider adding an experimental bytecode to the standard PyPy, even if >>> it is normally never used, as long as it's extensively tested. >>> Designing the bytecode correctly is an open question. >>> >>> >>> A bient?t, >>> >>> Armin. 
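The pure-Python route mentioned above (no interpreter fork, no new bytecode) is usually done with a trampoline: a decorator whose loop re-invokes the function whenever it signals a tail call, so the stack stays at constant depth. A minimal sketch with invented names, assuming the function cooperates by returning fact.call(...) instead of calling itself directly:

    import functools

    class _Bounce(object):
        def __init__(self, func, args, kwargs):
            self.func, self.args, self.kwargs = func, args, kwargs

    def tail_recursive(func):
        # run func in a loop; a "tail call" returns a _Bounce instead of recursing
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            while isinstance(result, _Bounce):
                result = result.func(*result.args, **result.kwargs)
            return result
        wrapper.call = lambda *a, **kw: _Bounce(func, a, kw)
        return wrapper

    @tail_recursive
    def fact(n, acc=1):
        if n <= 1:
            return acc
        return fact.call(n - 1, acc * n)

    print fact(100000)    # constant stack depth, no RuntimeError

Note the trade-off discussed in this thread: the intermediate frames are gone, so there is no traceback history left to print if something goes wrong.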
>>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> https://mail.python.org/mailman/listinfo/pypy-dev >>> >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Fri Jul 11 10:57:24 2014 From: arigo at tunes.org (Armin Rigo) Date: Fri, 11 Jul 2014 10:57:24 +0200 Subject: [pypy-dev] Getting involved In-Reply-To: <20140710111224.GB22176@phase.tratt.net> References: <20140710111224.GB22176@phase.tratt.net> Message-ID: Hi Laurence, On 10 July 2014 13:12, Laurence Tratt wrote: > Tail call optimisation has trade-offs: one can always construct cases where > it loses debugging information. If you'll permit the immodesty, one of my old > blog articles gives one example of this [1]. A comment about this, if I can make it here: for languages like Python, there are two slightly different issues. The first is to use a constant amount of memory to do unbounded calls. This is difficult because of the stack trace issue you mention. However, a second and smaller issue would be to use a constant amount of *stack*, allowing a non-constant amount of *heap* to be used. If you want full tracebacks and full compatibility with Python, this would be a rather fine solution nowadays. Nobody cares much if there is a temporary chained-list of 100'000 objects in memory. This is what I had in mind above: add a TAIL_CALL bytecode that tries to allocate a new "Python Frame" object and attach it on the chained list, but then instead of recursively calling the interpreter, it would run the same-level interpreter with the newly allocated frame. This may be enough to make happy people with what I'm going to call the "tail calls! grudge". I would be hard-pressed to know if the result would be faster or slower, or if the JIT would need tweaks to perform better, but I'm certainly willing to let them try that (and help a bit). Now maybe the "tail calls! grudge" runs deeper and this solution would not be acceptable either. Such people would like a fully constant-memory recursion, and would actually *not* like full tracebacks, simply on the grounds that it doesn't help anybody to print a traceback of 100'000 entries or more. These are the people that occasionally go to python-dev and try to design what is basically language-level changes to support tail calls, only to be generally flatly rejected. If John Smith is in this category, then his ideas will be equally flatly rejected from PyPy --- with the exception that we might consider it if he at some point would like some core support from PyPy (like a new bytecode) in order to implement his ideas (e.g. in pure Python module that he could distribute on pip, which would give e.g. a decorator that hacks at the bytecode of its function.) A bient?t, Armin. From wizzat at gmail.com Fri Jul 11 18:11:21 2014 From: wizzat at gmail.com (Mark Roberts) Date: Fri, 11 Jul 2014 09:11:21 -0700 Subject: [pypy-dev] Getting involved In-Reply-To: References: <20140710111224.GB22176@phase.tratt.net> Message-ID: Does this mean that you cannot implement an interpreter for a language requiring TCO with PyPy/RPython? -Mark > On Jul 11, 2014, at 1:57, Armin Rigo wrote: > > Hi Laurence, > >> On 10 July 2014 13:12, Laurence Tratt wrote: >> Tail call optimisation has trade-offs: one can always construct cases where >> it loses debugging information. 
If you'll permit the immodesty, one of my old >> blog articles gives one example of this [1]. > > A comment about this, if I can make it here: for languages like > Python, there are two slightly different issues. The first is to use > a constant amount of memory to do unbounded calls. This is difficult > because of the stack trace issue you mention. However, a second and > smaller issue would be to use a constant amount of *stack*, allowing a > non-constant amount of *heap* to be used. > > If you want full tracebacks and full compatibility with Python, this > would be a rather fine solution nowadays. Nobody cares much if there > is a temporary chained-list of 100'000 objects in memory. This is > what I had in mind above: add a TAIL_CALL bytecode that tries to > allocate a new "Python Frame" object and attach it on the chained > list, but then instead of recursively calling the interpreter, it > would run the same-level interpreter with the newly allocated frame. > This may be enough to make happy people with what I'm going to call > the "tail calls! grudge". I would be hard-pressed to know if the > result would be faster or slower, or if the JIT would need tweaks to > perform better, but I'm certainly willing to let them try that (and > help a bit). > > Now maybe the "tail calls! grudge" runs deeper and this solution would > not be acceptable either. Such people would like a fully > constant-memory recursion, and would actually *not* like full > tracebacks, simply on the grounds that it doesn't help anybody to > print a traceback of 100'000 entries or more. These are the people > that occasionally go to python-dev and try to design what is basically > language-level changes to support tail calls, only to be generally > flatly rejected. If John Smith is in this category, then his ideas > will be equally flatly rejected from PyPy --- with the exception that > we might consider it if he at some point would like some core support > from PyPy (like a new bytecode) in order to implement his ideas (e.g. > in pure Python module that he could distribute on pip, which would > give e.g. a decorator that hacks at the bytecode of its function.) > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Fri Jul 11 18:17:37 2014 From: arigo at tunes.org (Armin Rigo) Date: Fri, 11 Jul 2014 18:17:37 +0200 Subject: [pypy-dev] Getting involved In-Reply-To: References: <20140710111224.GB22176@phase.tratt.net> Message-ID: Hi Mark, On 11 July 2014 18:11, Mark Roberts wrote: > Does this mean that you cannot implement an interpreter for a language requiring TCO with PyPy/RPython? No, why? You write the interpreter you want. For example Pyrolog has TCO, as required by Prolog. It doesn't have guaranteed complete tracebacks, which is probably what Prolog people want anyway. A bient?t, Armin. From wizzat at gmail.com Fri Jul 11 18:45:33 2014 From: wizzat at gmail.com (Mark Roberts) Date: Fri, 11 Jul 2014 09:45:33 -0700 Subject: [pypy-dev] Getting involved In-Reply-To: References: <20140710111224.GB22176@phase.tratt.net> Message-ID: Thank you for the quick answer. Just making sure I was understanding which layer was being discussed. The addition of byte codes is what made me question if it was a deeper issue. 
-Mark > On Jul 11, 2014, at 9:17, Armin Rigo wrote: > > Hi Mark, > >> On 11 July 2014 18:11, Mark Roberts wrote: >> Does this mean that you cannot implement an interpreter for a language requiring TCO with PyPy/RPython? > > No, why? You write the interpreter you want. For example Pyrolog has > TCO, as required by Prolog. It doesn't have guaranteed complete > tracebacks, which is probably what Prolog people want anyway. > > > A bient?t, > > Armin. From laurie at tratt.net Sat Jul 12 16:47:41 2014 From: laurie at tratt.net (Laurence Tratt) Date: Sat, 12 Jul 2014 15:47:41 +0100 Subject: [pypy-dev] Getting involved In-Reply-To: References: <20140710111224.GB22176@phase.tratt.net> Message-ID: <20140712144741.GI25315@overdrive.tratt.net> On Fri, Jul 11, 2014 at 10:57:24AM +0200, Armin Rigo wrote: Hello Armin, > A comment about this, if I can make it here: for languages like Python, > there are two slightly different issues. The first is to use a constant > amount of memory to do unbounded calls. This is difficult because of the > stack trace issue you mention. However, a second and smaller issue would > be to use a constant amount of *stack*, allowing a non-constant amount of > *heap* to be used. Yes, that's a good point. My guess is that it won't keep the "we want tail calls optimised" folk happy. Because it uses arbitrary amounts of memory, it will be slower than stack-constant tail calls (if my experience with the old Converge VM is anything to go by). Assuming this would come with "turn the recursion limit off", it would also mean that anyone who writes an infinitely recursive function will probably hit death-by-swap before they've noticed ;) Given that, by typing "unlimit -s" I can allocate a 32MiB stack to processes, which gives a lot of room for Python-level recursion, I don't honestly know if allowing people to go much deeper will be much practical help. It won't hurt, of course (except for the death-by-swap thing). Laurie From techtonik at gmail.com Tue Jul 15 11:05:28 2014 From: techtonik at gmail.com (anatoly techtonik) Date: Tue, 15 Jul 2014 12:05:28 +0300 Subject: [pypy-dev] Access the name of variable that is being assigned Message-ID: Hi, Is it possible at all to define a class in Python that can read name of variable it is assigned to on init? >>> MyObject = SomeClass() >>> print(MyObject) 'MyObject' For this to work, SomeClass __init__ needs to know what variable name is currently waiting to be assigned. But at the time __init__ is executed, MyObject is not entered global or local space yet. I know that this is possible to do on AST level, but AST is inaccessible when program is running, so what is the corresponding structure to track that? (links to source are appreciated) 1. Is it possible to do this in CPython and PyPy? 2. Is it possible to do this in generic way? 3. Is there any stack of assignments? 3.1. Is this stack accessible? Thanks. -- anatoly t. From yyc1992 at gmail.com Tue Jul 15 11:50:37 2014 From: yyc1992 at gmail.com (Yichao Yu) Date: Tue, 15 Jul 2014 17:50:37 +0800 Subject: [pypy-dev] Access the name of variable that is being assigned In-Reply-To: References: Message-ID: On Tue, Jul 15, 2014 at 5:05 PM, anatoly techtonik wrote: > Hi, > > Is it possible at all to define a class in Python that > can read name of variable it is assigned to on init? > > >>> MyObject = SomeClass() > >>> print(MyObject) > 'MyObject' I thing in general a normal object in Python does not have a name and there's nothing special about the name of the variable it is assigned to first. 
To see why is is not going to work, what do you expect you print function to do if the object is created like some_function(SomeClass()) or some_other_object.some_attribute = SomeClass() or some_variable = another_variable = SomeClass() or some_variable = (SomeClass(),) or even SomeClass() # not assigning to anything etc.... I guess it would be better if you can describe what you really want to do. > > For this to work, SomeClass __init__ needs to know > what variable name is currently waiting to be > assigned. But at the time __init__ is executed, > MyObject is not entered global or local space yet. > > I know that this is possible to do on AST level, but > AST is inaccessible when program is running, so > what is the corresponding structure to track that? > (links to source are appreciated) > > 1. Is it possible to do this in CPython and PyPy? > 2. Is it possible to do this in generic way? > 3. Is there any stack of assignments? > 3.1. Is this stack accessible? > > Thanks. > -- > anatoly t. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From estama at gmail.com Tue Jul 15 18:37:02 2014 From: estama at gmail.com (Eleytherios Stamatogiannakis) Date: Tue, 15 Jul 2014 19:37:02 +0300 Subject: [pypy-dev] Dispatch on type In-Reply-To: References: Message-ID: <53C558AE.7050706@gmail.com> Hello, Is there any way to have fast dispatch based on the type of a variable? I'm talking about code of the form: t = type(var) if t is int: i(v) elif t is long: l(v) elif t is float: f(v) elif t is str: s(v) elif t is unicode: u(v) ... I have tried these ideas: - Having the types as keys in a dict and the functions as lambdas. - Creating a list from min(type_hashes) to max(type_hashes) (with lambdas as list values) and indexing in it with hash(var_type) - min(type_hashes) But both were slower than the multiple ifs. The ideal case would be to have an optimization like C/C++ compilers do to switch statements, where they would create a binary search over the multiple cases like below: Assuming that: hash(int) < hash(long) < hash(float) .... t=hash(type(var)) if t < hash(float): if t < hash(long): i(v) else: l(v) else: if t< hash(unicode): s(v) else: u(v) The problem in Python is that the order of type_hashes is not constant. So it is not possible to create the binary search code. Kind regards, l. From arigo at tunes.org Tue Jul 15 19:28:34 2014 From: arigo at tunes.org (Armin Rigo) Date: Tue, 15 Jul 2014 19:28:34 +0200 Subject: [pypy-dev] Dispatch on type In-Reply-To: <53C558AE.7050706@gmail.com> References: <53C558AE.7050706@gmail.com> Message-ID: Hi, On 15 July 2014 18:37, Eleytherios Stamatogiannakis wrote: > t = type(var) > if t is int: > i(v) > elif t is long: > l(v) > elif t is float: > f(v) > elif t is str: > s(v) > elif t is unicode: > u(v) > ... This should already give you the fastest possible execution on PyPy, because the first type inspection should promote the type in the JIT. All subsequent "if" checks are constant-folded. However, to be sure, you need to check with jitviewer. Note however that if all paths are eventually compiled by the JIT, the promotion will have a number of different cases, and searching through them is again done by linear search for now. This can be regarded as a bug waiting for improvement. A bient?t, Armin. 
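For reference, the dict-based variant Eleytherios says he already tried (and measured as slower than the multiple ifs on PyPy) looks roughly like this; the handlers here are stand-ins for his i/l/f/s/u functions:

    handlers = {
        int:     lambda v: ('int', v),
        long:    lambda v: ('long', v),
        float:   lambda v: ('float', v),
        str:     lambda v: ('str', v),
        unicode: lambda v: ('unicode', v),
    }

    def dispatch(v):
        return handlers[type(v)](v)    # one dict lookup instead of an if-chain

    print dispatch(3), dispatch(2.5), dispatch(u'x')

A type not present in the dict raises KeyError here, whereas the if-chain simply falls through to its final branch.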
From techtonik at gmail.com Tue Jul 15 19:55:34 2014 From: techtonik at gmail.com (anatoly techtonik) Date: Tue, 15 Jul 2014 20:55:34 +0300 Subject: [pypy-dev] Access the name of variable that is being assigned In-Reply-To: References: Message-ID: On Tue, Jul 15, 2014 at 12:50 PM, Yichao Yu wrote: > On Tue, Jul 15, 2014 at 5:05 PM, anatoly techtonik wrote: >> Hi, >> >> Is it possible at all to define a class in Python that >> can read name of variable it is assigned to on init? >> >> >>> MyObject = SomeClass() >> >>> print(MyObject) >> 'MyObject' > > I thing in general a normal object in Python does not have a name and > there's nothing special about the name of the variable it is assigned > to first. To see why is is not going to work, what do you expect you > print function to do if the object is created like > > some_function(SomeClass()) I don't need this case, so I can ignore it. But reliably detecting this to distinguish from other situations would be nice. > or > some_other_object.some_attribute = SomeClass() Detect that name is an attribute, handle distinctly if needed, for my purpose print 'some_other_object.some_attribute' > or > some_variable = another_variable = SomeClass() Print closest assigned variable name, i.e. 'another_variable' > or > some_variable = (SomeClass(),) There is no direct assignment. Don't need this. > or even > SomeClass() # not assigning to anything > etc.... Good to detect. Don't need. Actually all cases that I don't need are the same case on this one - there is no direct assignment to variable. > I guess it would be better if you can describe what you really want to do. I described. Or you need use case or user story? I think I want to link object instances to variable names without to print those names (if possible) in __repr__ and debug messages to save time on troubleshooting. From estama at gmail.com Tue Jul 15 19:11:17 2014 From: estama at gmail.com (Elefterios Stamatogiannakis) Date: Tue, 15 Jul 2014 20:11:17 +0300 Subject: [pypy-dev] Dispatch on type In-Reply-To: References: <53C558AE.7050706@gmail.com> Message-ID: <53C560B5.6040407@gmail.com> On 15/7/2014 8:28 ??, Armin Rigo wrote: > Hi, > > On 15 July 2014 18:37, Eleytherios Stamatogiannakis wrote: >> t = type(var) >> if t is int: >> i(v) >> elif t is long: >> l(v) >> elif t is float: >> f(v) >> elif t is str: >> s(v) >> elif t is unicode: >> u(v) >> ... > > This should already give you the fastest possible execution on PyPy, > because the first type inspection should promote the type in the JIT. > All subsequent "if" checks are constant-folded. However, to be sure, > you need to check with jitviewer. > > Note however that if all paths are eventually compiled by the JIT, the > promotion will have a number of different cases, and searching through > them is again done by linear search for now. This can be regarded as > a bug waiting for improvement. Above code gets hit millions of times with different variable types. So in our case all paths are compiled and we are linear. Another idea that i have is the following. At startup i could sort all the hash(types), create (in a string) a python method that does binary sorting and eval it. Would the JIT be able to handle eval gymnastics like that? Thank you. l. 
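A sketch of that generate-and-eval idea: sort the handled types by hash once at startup, emit a nested if/else binary search as source, and exec it. All names below are invented, it assumes no two handled types share a hash value, and whether the JIT treats the generated function well is exactly the open question here:

    def build_dispatcher(handlers):
        # handlers maps a type to a one-argument function, e.g. {int: i, float: f}
        items = sorted(handlers.items(), key=lambda kv: hash(kv[0]))
        env = {}
        for n, (t, fn) in enumerate(items):
            env['_f%d' % n] = fn

        def gen(lo, hi, depth):
            pad = '    ' * depth
            if lo == hi:
                return pad + 'return _f%d(v)\n' % lo
            mid = (lo + hi) // 2
            return (pad + 'if h <= %d:\n' % hash(items[mid][0])
                    + gen(lo, mid, depth + 1)
                    + pad + 'else:\n'
                    + gen(mid + 1, hi, depth + 1))

        src = 'def dispatch(v):\n    h = hash(type(v))\n' + gen(0, len(items) - 1, 1)
        exec(src, env)      # qualified exec: statement form on 2.x, call on 3.x
        return env['dispatch']

    dispatch = build_dispatcher({
        int:   lambda v: 'int',
        float: lambda v: 'float',
        str:   lambda v: 'str',
    })
    print dispatch(1), dispatch(2.5), dispatch('x')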
From william.leslie.ttg at gmail.com Wed Jul 16 00:17:55 2014 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Wed, 16 Jul 2014 08:17:55 +1000 Subject: [pypy-dev] Fwd: Access the name of variable that is being assigned In-Reply-To: References: Message-ID: Ack, resend because pypy-dev does not set reply-to ... On 16 July 2014 03:55, anatoly techtonik wrote: > On Tue, Jul 15, 2014 at 12:50 PM, Yichao Yu wrote: >> On Tue, Jul 15, 2014 at 5:05 PM, anatoly techtonik wrote: >> I guess it would be better if you can describe what you really want to do. > > I described. Or you need use case or user story? I think I want to link > object instances to variable names without to print those names > (if possible) in __repr__ and debug messages to save time on > troubleshooting. The most common practice is to pass some kind of debugging info to the constructor, otherwise, setting debug info on the instance is ok, too. my_object = SomeClass() my_object.debug_info = 'This is the typical case' You can also use the inspect module to grab the source line of the caller: http://codepad.org/YvStcEMv -- William Leslie Notice: Likely much of this email is, by the nature of copyright, covered under copyright law. You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement. From steve at pearwood.info Wed Jul 16 04:03:15 2014 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 16 Jul 2014 12:03:15 +1000 Subject: [pypy-dev] Access the name of variable that is being assigned In-Reply-To: References: Message-ID: <20140716020315.GF9112@ando> On Tue, Jul 15, 2014 at 5:05 PM, anatoly techtonik wrote: > Is it possible at all to define a class in Python that > can read name of variable it is assigned to on init? > > >>> MyObject = SomeClass() > >>> print(MyObject) > 'MyObject' This feature would be useful for things like namedtuple, where we currently have to write the name twice: record = namedtuple('record', 'a b c d') But I'm not sure why Anatoly is asking here. It would be a change in semantics of Python, and while I suppose it's possible for PyPy to lead the way with a semantic change for Python 3.5 or higher, or even an implementation-specific feature that other Python's don't offer, I would expect that normally this idea should go through CPython first. -- Steven From yyc1992 at gmail.com Wed Jul 16 05:54:16 2014 From: yyc1992 at gmail.com (Yichao Yu) Date: Wed, 16 Jul 2014 11:54:16 +0800 Subject: [pypy-dev] Access the name of variable that is being assigned In-Reply-To: <20140716020315.GF9112@ando> References: <20140716020315.GF9112@ando> Message-ID: I guess this feature is mainly useful for debugging since it is really hard to do it consistantly in python. And IMHO, for debuging, it might be more useful to record the position (file + line num etc) where the object is created, which has less ambiguity and can probably already be done by tracing back the call stack in the constructor (until the most derived constructor or even just record the whole call stack). For not writing the name of a named tuple twice, won't it be a better idea to just use it as a base class, i.e.: class TupleName(named_tuple_base('a', 'b', 'c', 'd')): pass or maybe even using the syntax and trick of the new Enum class introduced in python 3.4(? or 3.3?) 
class TupleName(TupleBase): a = 1 b = 1 IMHO, this fits the python syntax better (if the current one is not good enough :) ) Yichao Yu On Wed, Jul 16, 2014 at 10:03 AM, Steven D'Aprano wrote: > On Tue, Jul 15, 2014 at 5:05 PM, anatoly techtonik wrote: > >> Is it possible at all to define a class in Python that >> can read name of variable it is assigned to on init? >> >> >>> MyObject = SomeClass() >> >>> print(MyObject) >> 'MyObject' > > This feature would be useful for things like namedtuple, where we > currently have to write the name twice: > > record = namedtuple('record', 'a b c d') > > But I'm not sure why Anatoly is asking here. It would be a change in > semantics of Python, and while I suppose it's possible for PyPy to lead > the way with a semantic change for Python 3.5 or higher, or even an > implementation-specific feature that other Python's don't offer, I would > expect that normally this idea should go through CPython first. > > -- > Steven > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From bokr at oz.net Wed Jul 16 04:55:10 2014 From: bokr at oz.net (Bengt Richter) Date: Wed, 16 Jul 2014 04:55:10 +0200 Subject: [pypy-dev] Access the name of variable that is being assigned In-Reply-To: References: Message-ID: <53C5E98E.2080108@oz.net> Hi Anatoly, Haven't done any python in a loong time now, but I lurk sometimes on the pypy list, and thought ok, I'll play with that, and see if it's nay use to you. I wrote techtonik.py (included at the end) and used it interactively to show some features, first trying to approximate the interaction you gave as an example (well, except the ns. prefix ;-) I may have gone a little overboard, tracking re-assignments etc ;-) HTH On 07/15/2014 11:05 AM anatoly techtonik wrote: > Hi, > > Is it possible at all to define a class in Python that > can read name of variable it is assigned to on init? > > >>> MyObject = SomeClass() > >>> print(MyObject) > 'MyObject' > > For this to work, SomeClass __init__ needs to know > what variable name is currently waiting to be > assigned. But at the time __init__ is executed, > MyObject is not entered global or local space yet. > > I know that this is possible to do on AST level, but > AST is inaccessible when program is running, so > what is the corresponding structure to track that? > (links to source are appreciated) > > 1. Is it possible to do this in CPython and PyPy? > 2. Is it possible to do this in generic way? > 3. Is there any stack of assignments? > 3.1. Is this stack accessible? > > Thanks. > --- Python 2.7.3 (default, Jul 3 2012, 19:58:39) [GCC 4.7.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from techtonik import NameSetter, SomeClass >>> ns = NameSetter() >>> ns.MyObject = SomeClass() >>> print ns.MyObject 'MyObject' >>> # QED ;-) ... 
>>> ns.copy1 = ns.MyObject # new name for same object >>> ns.copy1 >>> print ns.MyObject 'MyObject', aka 'copy1' >>> ns.a = ns.b = ns.c = SomeClass('multi') >>> ns.a >>> print ns.b 'a', aka 'b', aka 'c' >>> ns.b = 123 >>> print ns.b 123 >>> print ns.a 'a', aka '\\b', aka 'c' >>> ns.a >>> nd.d = ns.a Traceback (most recent call last): File "", line 1, in NameError: name 'nd' is not defined >>> ns.d = ns.a >>> ns.d >>> ns.b = MyObject Traceback (most recent call last): File "", line 1, in NameError: name 'MyObject' is not defined >>> ns.b = ns.MyObject >>> ns.b >>> ns.a >>> ns.b = ns.a >>> ns.b >>> ns.MyObject >>> If you run techtonik.py, it runs a little test, whose output is: ---- [04:37 ~/wk/py]$ techtonik.py 'MyObject' 'MyObject', aka 'mycopy' 'myinstance' 'a', aka 'b', aka 'c' 10 20 'a', aka '\\b', aka 'c' [04:37 ~/wk/py]$ ---- Here is the techtonik.py source: ============================================ #!/usr/bin/python # debugging ideas for anatoly # 2014-07-16 00:54:06 # # The goal is to make something like this: # # >>> MyObject = SomeClass() # >>> print(MyObject) # 'MyObject' # # we can do it with an attribute name space: # if you can live with prefixing the name space # name to the names you want to track assignments of: class NameSetter(object): def __setattr__(self, name, val): # if our namespace already has the name being assigned ... if hasattr(self, name): # if that is a nametracking object like a SomeClass instance... tgt = getattr(self, name) if hasattr(tgt, '_names') and isinstance(tgt._names, list): # update its name list to reflect clobbering (prefix '\') for i,nm in enumerate(tgt._names): if nm==name: tgt._names[i] = '\\'+ nm # if value being assigned has a _names list attribute like SomeClass if hasattr(val, '_names') and isinstance(val._names, list): val._names.append(name) # add name to assignemt list # now store the value, whatever the type object.__setattr__(self, name, val) # avoid recursive loop class SomeClass(object): def __init__(self, *args, **kw): self.args = args self.kw = kw self._names = [] def __str__(self): return ', aka '.join(repr(s) for s in (self._names or ["(unassigned)"])) def __repr__(self): return '<%s obj: %r %r %s>' %( self.__class__.__name__, self.args, self.kw, 'assigned to: %s'%( ', '.join(repr(s) for s in (self._names or ["(unassigned)"])))) def test(): ns = NameSetter() ns.target = 'value' ns.MyObject = SomeClass(1,2,three=3) print ns.MyObject print repr(ns.MyObject) ns.mycopy = ns.MyObject print ns.MyObject print repr(ns.MyObject) ns.myinstance = SomeClass('same class new instance') print ns.myinstance print repr(ns.myinstance) ns.a = ns.b = ns.c = SomeClass('multi') print ns.a print repr(ns.b) ns.ten = 10 ns.b = 20 print ns.ten, ns.b, ns.a print repr(ns.a) if __name__ == '__main__': test() ====================================================== Have fun. Regards, Bengt Richter From cclauss at me.com Wed Jul 16 14:29:59 2014 From: cclauss at me.com (cclauss) Date: Wed, 16 Jul 2014 13:29:59 +0100 Subject: [pypy-dev] Support for current versions of Pypy on Heroku and Cloud Foundry platforms? In-Reply-To: References: Message-ID: <0155CC52-DA05-46F7-B9C5-57385DE88611@me.com> Hi Folks, The plan is to support Pypy on Heroku, but we need to have full libffi support before we move forward. As it stands, the following formulas produce a broken version of PyPy and Pypy3. Does anyone on this list have the required skills to suggest a working solution? 
https://github.com/heroku/heroku-buildpack-python/pull/154 and https://github.com/heroku/heroku-buildpack-python/pull/155 Thanks, CCC > On Jun 18, 2014, at 6:06, Alex Gaynor wrote: > > Hi Chris, > > Are you looking for an Infrastructure as a Service (something like AWS, or Rackspace Cloud) or a Platform as a Service (Heroku)? > > Typically IaaS providers just give you a bare linux box, where you can of course install your own PyPy; as you've seen it looks like the PyPy on Heroku is out of date. I think the easiest move is probably to look into writing a custom build pack for Heroku, if that's what you're interested in. > > Alex > > PS: Disclaimer, I work at Rackspace. > > >> On Tue, Jun 17, 2014 at 6:19 PM, cclauss wrote: >> Hi Folks, >> >> Currently Heroku only has Pypy 1.9 on an unsupported, experimental basis. >> >> https://devcenter.heroku.com/articles/python-runtimes#supported-python-runtimes >> and https://github.com/heroku/heroku-buildpack-python/issues/139 >> >> Other Cloud Foundry-based IaaS offering (IBM BlueMix, etc.) seem to use similar buildpacks. >> https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/Buildpacks >> >> Do any of you know of an Infrastructure as a Service provider that supports current versions of Pypy? >> >> Thanks for any pointers that you can provide. Chris Clauss >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev > > > > -- > "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed Jul 16 15:50:30 2014 From: arigo at tunes.org (Armin Rigo) Date: Wed, 16 Jul 2014 15:50:30 +0200 Subject: [pypy-dev] Support for current versions of Pypy on Heroku and Cloud Foundry platforms? In-Reply-To: <0155CC52-DA05-46F7-B9C5-57385DE88611@me.com> References: <0155CC52-DA05-46F7-B9C5-57385DE88611@me.com> Message-ID: Hi Chris, On 16 July 2014 14:29, cclauss wrote: > The plan is to support Pypy on Heroku, but we need to have full libffi > support before we move forward. What does this mean, exactly? You don't provide libffi on your build system, and so PyPy cannot be built there? A bient?t, Armin. From arigo at tunes.org Wed Jul 16 16:31:53 2014 From: arigo at tunes.org (Armin Rigo) Date: Wed, 16 Jul 2014 16:31:53 +0200 Subject: [pypy-dev] Dispatch on type In-Reply-To: <53C560B5.6040407@gmail.com> References: <53C558AE.7050706@gmail.com> <53C560B5.6040407@gmail.com> Message-ID: Hi, On 15 July 2014 19:11, Elefterios Stamatogiannakis wrote: > Above code gets hit millions of times with different variable types. So in > our case all paths are compiled and we are linear. > > Another idea that i have is the following. At startup i could sort all the > hash(types), create (in a string) a python method that does binary sorting > and eval it. Would the JIT be able to handle eval gymnastics like that? You need to try. There are far too many variations to be able to give a clear yes/no answer. For example, linear searches through only 6 items is incredibly fast anyway. But here's what I *think* should occur with your piece of code (untested!): t = type(x) Here, in this line, in order to get the application-level type, we need to know the exact RPython class of x. 
This is because the type is not always written explicitly: a Python-level int object, for example, is in RPython an instance of the class W_IntObject. We know that all instances of W_IntObject have the Python type 'int'; it doesn't need to be written explicitly as a field every time. So at the line above, there is promotion of the RPython class of x. Right now this is done with linear searching through all cases seen so far. If there are 5-6 different cases it's fine. (Note that RPython class != Python class in general, but for built-in types like int, str, etc. there is a one-to-one correspondence.) So at the line above, assuming that x is an instance of a built-in type, we end up with t being a constant already (a different one in each of the 5-6 paths). if t is int: ... elif t is long: ... In all the rest of the function, the "if... elif..." are constant-folded away. You don't gain anything by doing more complicated logic with t. A bient?t, Armin. From cclauss at me.com Thu Jul 17 00:21:00 2014 From: cclauss at me.com (cclauss) Date: Wed, 16 Jul 2014 23:21:00 +0100 Subject: [pypy-dev] Support for current versions of Pypy on Heroku and Cloud Foundry platforms? In-Reply-To: References: <0155CC52-DA05-46F7-B9C5-57385DE88611@me.com> Message-ID: Hi Armin, The way to think about cloud application platforms like Heroku, Bluemix, and Cloud Foundry is that they provide you with a working Linux box and little else. Therefore you use a "buildpack" (like an install script) to bundle up all your executable environment, libraries, and code so that it can be loaded and run on a remote Linux box. In the case of CPython that means that the buildpack needs to install a working CPython, setuptools, and pip, etc. and then look in your requirements.txt to find which pypi modules pip needs to install and then it launches your webapp (written to django, flask, bottle, etc.). To get Pypy to work in place of CPython, the buildpack would need to install a working Pypy, setuptools, and pip. Libffi is an essential precursor to having Pypy work properly. Some interesting work was done in https://github.com/mfenniak/heroku-buildpack-python-libffi/blob/master/bin/steps/libffi to get libffi working in a buildpack but my limited understanding does not allow me to take that further. It would be of interest to get Pypy working on these Platform-as-a-Service environments in place of CPython but it is beyond my limited understanding to actually make it happen. I hope to see your presentation at EuroPython. On 16 Jul 2014, at 14:50, Armin Rigo wrote: > Hi Chris, > > On 16 July 2014 14:29, cclauss wrote: >> The plan is to support Pypy on Heroku, but we need to have full libffi >> support before we move forward. > > What does this mean, exactly? You don't provide libffi on your build > system, and so PyPy cannot be built there? > > > A bient?t, > > Armin. From fijall at gmail.com Thu Jul 17 09:09:23 2014 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 17 Jul 2014 09:09:23 +0200 Subject: [pypy-dev] Support for current versions of Pypy on Heroku and Cloud Foundry platforms? In-Reply-To: References: <0155CC52-DA05-46F7-B9C5-57385DE88611@me.com> Message-ID: Note that CPython bundles libffi, so you just bundle libffi with pypy in a buildpack and be done. On Thu, Jul 17, 2014 at 12:21 AM, cclauss wrote: > Hi Armin, > > The way to think about cloud application platforms like Heroku, Bluemix, and Cloud Foundry is that they provide you with a working Linux box and little else. 
Therefore you use a "buildpack" (like an install script) to bundle up all your executable environment, libraries, and code so that it can be loaded and run on a remote Linux box. > > In the case of CPython that means that the buildpack needs to install a working CPython, setuptools, and pip, etc. and then look in your requirements.txt to find which pypi modules pip needs to install and then it launches your webapp (written to django, flask, bottle, etc.). > > To get Pypy to work in place of CPython, the buildpack would need to install a working Pypy, setuptools, and pip. Libffi is an essential precursor to having Pypy work properly. Some interesting work was done in https://github.com/mfenniak/heroku-buildpack-python-libffi/blob/master/bin/steps/libffi to get libffi working in a buildpack but my limited understanding does not allow me to take that further. > > It would be of interest to get Pypy working on these Platform-as-a-Service environments in place of CPython but it is beyond my limited understanding to actually make it happen. > > I hope to see your presentation at EuroPython. > > On 16 Jul 2014, at 14:50, Armin Rigo wrote: > >> Hi Chris, >> >> On 16 July 2014 14:29, cclauss wrote: >>> The plan is to support Pypy on Heroku, but we need to have full libffi >>> support before we move forward. >> >> What does this mean, exactly? You don't provide libffi on your build >> system, and so PyPy cannot be built there? >> >> >> A bient?t, >> >> Armin. > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Thu Jul 17 10:44:57 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 17 Jul 2014 10:44:57 +0200 Subject: [pypy-dev] Support for current versions of Pypy on Heroku and Cloud Foundry platforms? In-Reply-To: References: <0155CC52-DA05-46F7-B9C5-57385DE88611@me.com> Message-ID: Hi, On 17 July 2014 09:09, Maciej Fijalkowski wrote: > Note that CPython bundles libffi, so you just bundle libffi with pypy > in a buildpack and be done. Also, I'm still not clear if a "bundle" contains all commands to recompile everything, or if it's just a binary package. Anyways, you can translate PyPy in a way that links to libffi statically instead of dynamically (I think it's the default, but I'm not sure). Then there is no libffi issue any more... I'm also quite unclear about why libffi in particular is the issue. As far as I know, on non-standard platforms you may have troubles compiling it, but on a standard Linux it should just work. If the only missing thing is knowledge about how to declare a "bundle" for Platform-as-service XYZ, then I'm afraid we can't help you on pypy-dev, but I don't see why making a "bundle" for PyPy would be easy but making a "bundle" for libffi not... And, anyway, as Maciej says you can put everything in the same bundle. It's not like the size of libffi (15KB?) matters. A bient?t, Armin. From yury at shurup.com Thu Jul 17 11:04:52 2014 From: yury at shurup.com (Yury V. Zaytsev) Date: Thu, 17 Jul 2014 11:04:52 +0200 Subject: [pypy-dev] Support for current versions of Pypy on Heroku and Cloud Foundry platforms? In-Reply-To: References: <0155CC52-DA05-46F7-B9C5-57385DE88611@me.com> Message-ID: <1405587892.2827.10.camel@newpride> On Thu, 2014-07-17 at 10:44 +0200, Armin Rigo wrote: > Also, I'm still not clear if a "bundle" contains all commands to > recompile everything, or if it's just a binary package. 
>From the github links posted by Chris it looks like a "bundle" is just a script that fetches a tarball with pre-compiled software from some trusted server and unpacks it into the local filesystem. In particular, the above-mentioned libffi bundle fetches a pre-compiled version of libffi from some random AWS account and the PyPy bundle fetches the pre-compiled binaries the PyPy bitbucket page. The problem is, I guess, that the PyPy binaries are dynamically linked against libffi, but installing libffi bundle will not help the matters, if it's not going to be added to the LD_LIBRARY_PATH for PyPy to see. So, it's either that, or else I would recommend trying this distribution out instead of the official PyPy binaries: https://github.com/squeaky-pl/portable-pypy It seems to include everything that can be reasonably bundled up in a single archive. Anyways, the buildpack moniker is misleading, isn't it? One would naively assume that it has something to do with building, whereas this doesn't seem to be the case. -- Sincerely yours, Yury V. Zaytsev From estama at gmail.com Thu Jul 17 13:32:27 2014 From: estama at gmail.com (Eleytherios Stamatogiannakis) Date: Thu, 17 Jul 2014 14:32:27 +0300 Subject: [pypy-dev] Dispatch on type In-Reply-To: References: <53C558AE.7050706@gmail.com> <53C560B5.6040407@gmail.com> Message-ID: <53C7B44B.5000204@gmail.com> On 16/07/14 17:31, Armin Rigo wrote: > Hi, > > On 15 July 2014 19:11, Elefterios Stamatogiannakis wrote: >> Above code gets hit millions of times with different variable types. So in >> our case all paths are compiled and we are linear. >> >> Another idea that i have is the following. At startup i could sort all the >> hash(types), create (in a string) a python method that does binary sorting >> and eval it. Would the JIT be able to handle eval gymnastics like that? > > You need to try. There are far too many variations to be able to give > a clear yes/no answer. For example, linear searches through only 6 > items is incredibly fast anyway. > > But here's what I *think* should occur with your piece of code (untested!): > > t = type(x) > > Here, in this line, in order to get the application-level type, we > need to know the exact RPython class of x. This is because the type > is not always written explicitly: a Python-level int object, for > example, is in RPython an instance of the class W_IntObject. We know > that all instances of W_IntObject have the Python type 'int'; it > doesn't need to be written explicitly as a field every time. So at > the line above, there is promotion of the RPython class of x. Right > now this is done with linear searching through all cases seen so far. Could this be made faster with binary search or jump tables, like what C++ compilers use to optimize switches? I also noticed that "if" ladders checking multiple "isinstance" happen a lot in Python's standard library. Maybe an optimization like that would generally improve the speed of PyPy ? > If there are 5-6 different cases it's fine. (Note that RPython class > != Python class in general, but for built-in types like int, str, etc. > there is a one-to-one correspondence.) > > So at the line above, assuming that x is an instance of a built-in > type, we end up with t being a constant already (a different one in > each of the 5-6 paths). > > if t is int: > ... > elif t is long: > ... > > In all the rest of the function, the "if... elif..." are > constant-folded away. You don't gain anything by doing more > complicated logic with t. > > > A bient?t, > > Armin. 
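For reference, the closest application-level equivalent of such a jump table is a dict keyed by the type object; whether the JIT turns the dict lookup into something better than the handful of guards of the if/elif ladder is exactly the question asked above. A minimal sketch (Python 2, to match the int/long example in the quoted reply; the handler bodies are made up):

    # Dispatch on type via a dict lookup instead of an if/elif ladder.
    # The handlers are placeholders; only the pattern matters.

    def handle_int(x):
        return x + 1

    def handle_float(x):
        return x * 2.0

    def handle_str(x):
        return x.upper()

    HANDLERS = {
        int: handle_int,
        long: handle_int,      # Python 2 long ints reuse the int handler
        float: handle_float,
        str: handle_str,
    }

    def process(x):
        try:
            handler = HANDLERS[type(x)]
        except KeyError:
            raise TypeError("unsupported type: %r" % (type(x),))
        return handler(x)

Note that looking up type(x) exactly does not cover subclasses, while an isinstance() ladder does, so the two are not always interchangeable.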
> From arigo at tunes.org Thu Jul 17 15:05:44 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 17 Jul 2014 15:05:44 +0200 Subject: [pypy-dev] Dispatch on type In-Reply-To: <53C7B44B.5000204@gmail.com> References: <53C558AE.7050706@gmail.com> <53C560B5.6040407@gmail.com> <53C7B44B.5000204@gmail.com> Message-ID: Hi, On 17 July 2014 13:32, Eleytherios Stamatogiannakis wrote: > Could this be made faster with binary search or jump tables, like what C++ > compilers use to optimize switches? Yes, it's a long-term plan we have to enable this in the JIT. Let me repeat again that for 6 items it's a bit unclear that it would be better, but it would definitely be an improvement if the number of cases grows larger. A bient?t, Armin. From alendit at googlemail.com Thu Jul 17 15:06:54 2014 From: alendit at googlemail.com (Dimitri Vorona) Date: Thu, 17 Jul 2014 15:06:54 +0200 Subject: [pypy-dev] Patchpoints in LLVM Message-ID: Hi, just wanted to link this discussion http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-October/066573.html and this piece of docs http://llvm.org/docs/StackMaps.html. As far as I understand, the lack of patchpoints (i.e. a way to patch dynamically generated code) was the major burden on the way to use LLVM for code generation for PyPy. The functionality was implemented as a part of FTL, LLVM-based JavaScript compiler (http://blog.llvm.org/2014/07/ftl-webkits-llvm-based-jit.html) Regards, Dimitri. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Thu Jul 17 15:10:48 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 17 Jul 2014 15:10:48 +0200 Subject: [pypy-dev] Support for current versions of Pypy on Heroku and Cloud Foundry platforms? In-Reply-To: <1405587892.2827.10.camel@newpride> References: <0155CC52-DA05-46F7-B9C5-57385DE88611@me.com> <1405587892.2827.10.camel@newpride> Message-ID: Hi Yury, On 17 July 2014 11:04, Yury V. Zaytsev wrote: > The problem is, I guess, that the PyPy binaries are dynamically linked > against libffi, but installing libffi bundle will not help the matters, > if it's not going to be added to the LD_LIBRARY_PATH for PyPy to see. Thanks for the explanations! And, just to be complete, may I ask how PyPy finds the other libraries it needs, and how in general project X finds library Y that it depends on? I still fail to see why we're discussing libffi specifically here... A bient?t, Armin. From yury at shurup.com Thu Jul 17 16:51:17 2014 From: yury at shurup.com (Yury V. Zaytsev) Date: Thu, 17 Jul 2014 16:51:17 +0200 Subject: [pypy-dev] Support for current versions of Pypy on Heroku and Cloud Foundry platforms? In-Reply-To: References: <0155CC52-DA05-46F7-B9C5-57385DE88611@me.com> <1405587892.2827.10.camel@newpride> Message-ID: <1405608677.2827.89.camel@newpride> On Thu, 2014-07-17 at 15:10 +0200, Armin Rigo wrote: > > Thanks for the explanations! And, just to be complete, may I ask how > PyPy finds the other libraries it needs, Excellent question for a PyPy developer; to be honest, I actually have very little idea :-) I guess it just adds the -l / -L / -I flags (discovered either by checking a pre-defined set of paths, or by running pkg-config) and hopes that all the right libraries are installed and headers are at the correct place, and blows up otherwise. It seems from a cursory grep that the Makefile writing logic is here: pypy/pypy/rpython/translator/platform Specifically, see posix.py and linux.py. > and how in general project X finds library Y that it depends on? 
The build systems usually try pkg-config, then a list of pre-defined locations and flags. Then a dummy executable is compiled, linked and run. If it works, everything is great, if not, it complains and/or disables the dependency. Now, this has to do with building, but as we've established, the buildpacks don't actually build anything, but rather unpack pre-compiled binaries. In this case, it's a dynamic linker thing. In brief, it looks for libraries in RPATH / LD_LIBRARY_PATH and standard locations. Does this make sense to you? > I still fail to see why we're discussing libffi specifically here... Actually, I'm not sure about that either :-) Maybe the machines provided by Heroku already have all other libraries that the binaries of PyPy that you provide via the buildbot are dynamically linked against, and so libffi is the only problematic one... -- Sincerely yours, Yury V. Zaytsev From arigo at tunes.org Thu Jul 17 23:18:20 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 17 Jul 2014 23:18:20 +0200 Subject: [pypy-dev] Patchpoints in LLVM In-Reply-To: References: Message-ID: Hi Dimitri, On 17 July 2014 15:06, Dimitri Vorona wrote: > http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-October/066573.html and this > piece of docs http://llvm.org/docs/StackMaps.html. As far as I understand, > the lack of patchpoints (i.e. a way to patch dynamically generated code) was > the major burden on the way to use LLVM for code generation for PyPy. As far as I understand, this is not a general way to patch dynamically generated code, but only a way to patch code as some precise point which must be immediately after a call. I don't see how it helps with a tracing JIT, where patching is needed to replace a failing guard (which is indeed a call) with a non-failing optimized piece of code (which no longer calls anything, and which must be compiled from the guard's position and not as a complete function). I may be missing something, of course. I'd however recommend anybody interested in digging more to be aware of http://pypy.readthedocs.org/en/latest/faq.html#could-we-use-llvm (and as far as I'm concerned I'll stick to the position explained here). A bient?t, Armin. From alex.gaynor at gmail.com Fri Jul 18 20:54:16 2014 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Fri, 18 Jul 2014 11:54:16 -0700 Subject: [pypy-dev] Support for current versions of Pypy on Heroku and Cloud Foundry platforms? In-Reply-To: <1405608677.2827.89.camel@newpride> References: <0155CC52-DA05-46F7-B9C5-57385DE88611@me.com> <1405587892.2827.10.camel@newpride> <1405608677.2827.89.camel@newpride> Message-ID: It looks like Heroku will soon have native support for PyPy 2.3.1: https://github.com/heroku/heroku-buildpack-python/issues/139 Alex On Thu, Jul 17, 2014 at 7:51 AM, Yury V. Zaytsev wrote: > On Thu, 2014-07-17 at 15:10 +0200, Armin Rigo wrote: > > > > Thanks for the explanations! And, just to be complete, may I ask how > > PyPy finds the other libraries it needs, > > Excellent question for a PyPy developer; to be honest, I actually have > very little idea :-) > > I guess it just adds the -l / -L / -I flags (discovered either by > checking a pre-defined set of paths, or by running pkg-config) and hopes > that all the right libraries are installed and headers are at the > correct place, and blows up otherwise. > > It seems from a cursory grep that the Makefile writing logic is here: > > pypy/pypy/rpython/translator/platform > > Specifically, see posix.py and linux.py. 
> > > and how in general project X finds library Y that it depends on? > > The build systems usually try pkg-config, then a list of pre-defined > locations and flags. Then a dummy executable is compiled, linked and > run. If it works, everything is great, if not, it complains and/or > disables the dependency. > > Now, this has to do with building, but as we've established, the > buildpacks don't actually build anything, but rather unpack pre-compiled > binaries. In this case, it's a dynamic linker thing. In brief, it looks > for libraries in RPATH / LD_LIBRARY_PATH and standard locations. > > Does this make sense to you? > > > I still fail to see why we're discussing libffi specifically here... > > Actually, I'm not sure about that either :-) > > Maybe the machines provided by Heroku already have all other libraries > that the binaries of PyPy that you provide via the buildbot are > dynamically linked against, and so libffi is the only problematic one... > > -- > Sincerely yours, > Yury V. Zaytsev > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero GPG Key fingerprint: 125F 5C67 DFE9 4084 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kitizz.devside at gmail.com Mon Jul 21 13:04:53 2014 From: kitizz.devside at gmail.com (Kit Ham) Date: Mon, 21 Jul 2014 21:04:53 +1000 Subject: [pypy-dev] Question about PyPy Sandboxing In-Reply-To: References: Message-ID: Beautiful, thank you =) Kit On Tue, Jun 24, 2014 at 5:17 PM, Armin Rigo wrote: > Hi Kit, > > On 24 June 2014 02:42, Kit Ham wrote: > > Can multiple PyPy sandboxes be executed and interacted with from C/C++? > > Yes, you can in theory do whatever you want, but it is not really > finished and documented. > > You first need to decide if it's ok that each sandbox runs in a > separate process or if you would rather have an in-process solution > (more work). In the first case, you'd extend the "controller" > (regular Python code controlling the sandboxed subprocess) so that it > can handle several subprocesses at once. Then, for example, you'd run > the controller in CPython, and link to CPython from your C/C++ code > using the standard CPython ways. Or you can even rewrite the > controller in C/C++: it's just some code reading and writing to the > subprocesses' stdin/stdout. (Each subprocess is one sandboxed PyPy > instance.) > > > A bient?t, > > Armin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed Jul 23 18:14:06 2014 From: arigo at tunes.org (Armin Rigo) Date: Wed, 23 Jul 2014 18:14:06 +0200 Subject: [pypy-dev] For py3k... Message-ID: Hi all, A module mentioned today in a EuroPython lightning talk: "lzma" reimplemented in cffi (compatible with the one from Python 3.3's stdlib). https://pypi.python.org/pypi/lzmaffi A bient?t, Armin. From p.j.a.cock at googlemail.com Wed Jul 23 19:20:03 2014 From: p.j.a.cock at googlemail.com (Peter Cock) Date: Wed, 23 Jul 2014 18:20:03 +0100 Subject: [pypy-dev] For py3k... 
In-Reply-To: References: Message-ID: Thanks - that will be a useful companion to my backport for using lzma on C Python 2.6, 2.7 and early Python 3 :) https://pypi.python.org/pypi/backports.lzma Peter On Wed, Jul 23, 2014 at 5:14 PM, Armin Rigo wrote: > Hi all, > > A module mentioned today in a EuroPython lightning talk: "lzma" > reimplemented in cffi (compatible with the one from Python 3.3's > stdlib). > > https://pypi.python.org/pypi/lzmaffi > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From anto.cuni at gmail.com Thu Jul 24 19:26:12 2014 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 24 Jul 2014 18:26:12 +0100 Subject: [pypy-dev] wrong user mapped in the issue tracker Message-ID: Hi, I was looking at this issue: https://bitbucket.org/pypy/pypy/issue/1514/module-behaviour-incompatibility-which and I noticed that pypy's user "amaury" has been mapped to bitbucket's user "amaury", which unfortunately it's another physical person. I don't know much about the migration of the issue tracker, so I don't even know if it's possible and how easy/hard is to fix it, just wanted to point out. ciao, Anto -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Fri Jul 25 10:24:41 2014 From: arigo at tunes.org (Armin Rigo) Date: Fri, 25 Jul 2014 10:24:41 +0200 Subject: [pypy-dev] libdynd Message-ID: Hi, Feedback, from Travis Oliphant at EuroPython: libdynd (https://github.com/ContinuumIO/libdynd) might be the longer-term future of NumPy, and it looks like it would be much more natural to bind to it from PyPy (via cffi). Worth a look I believe. It certainly looks to me like such a cffi binding would be much more user-friendly than numpypy, in the sense that missing functionality would be far easier to contribute back. A bient?t, Armin. From anto.cuni at gmail.com Sat Jul 26 11:19:40 2014 From: anto.cuni at gmail.com (Antonio Cuni) Date: Sat, 26 Jul 2014 11:19:40 +0200 Subject: [pypy-dev] libdynd In-Reply-To: References: Message-ID: Hi, this looks interesting, but from a quick look it seems they are only offering a C++ API? In that case, it might be better/easier to wrap it through cppyy than cffi. Also, did Travis told you what are the plans for scipy? On Fri, Jul 25, 2014 at 10:24 AM, Armin Rigo wrote: > Hi, > > Feedback, from Travis Oliphant at EuroPython: libdynd > (https://github.com/ContinuumIO/libdynd) might be the longer-term > future of NumPy, and it looks like it would be much more natural to > bind to it from PyPy (via cffi). Worth a look I believe. It > certainly looks to me like such a cffi binding would be much more > user-friendly than numpypy, in the sense that missing functionality > would be far easier to contribute back. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dimaqq at gmail.com Mon Jul 28 17:57:46 2014 From: dimaqq at gmail.com (Dima Tisnek) Date: Mon, 28 Jul 2014 17:57:46 +0200 Subject: [pypy-dev] Cross-post from python-ideas: Compressing the stack on the fly In-Reply-To: References: Message-ID: A slightly detached view (please correct me I'm wrong): Most stack frames are populated with very few items compared to allocated block (validation needed) That's awesome for compression. However, most items on the stack are references, i.e. pointers to memory locations. Some of these repeat, e.g. `self`, though I would suspect for most interesting tasks (e.g. traversing a graph), most references are disparate, if not unique. Granted, unique pointers still typically have compressible prefix. IMO CPython has a potential to be a cache trasher -- smallest frame I could find is 448 bytes, the LCM to 1K is 7K, in other words 16 recursions and values are aligned in the same slots relative to 1024K, and your 1-way assoc. cache is done; twice that and 2-way assoc. cache is done. May be completely masked by other memory accesses these frame do though, so don't quote me on the performance hit here. I suggest you capture a few deep stacks in problems from different domains and present your findings. (*) I've no idea how pypy implments stack On 27 May 2013 12:39, Ram Rachum wrote: > Hi guys, > > I made a post on the Python-ideas mailing list that I was told might be > relevant to Pypy. I've reproduced the original email below. Here is the > thread on Python-ideas with all the discussion. > > -------------- > > Hi everybody, > > Here's an idea I had a while ago. Now, I'm an ignoramus when it comes to how > programming languages are implemented, so this idea will most likely be > either (a) completely impossible or (b) trivial knowledge. > > I was thinking about the implementation of the factorial in Python. I was > comparing in my mind 2 different solutions: The recursive one, and the one > that uses a loop. Here are example implementations for them: > > def factorial_recursive(n): > if n == 1: > return 1 > return n * factorial_recursive(n - 1) > > def factorial_loop(n): > result = 1 > for i in range(1, n + 1): > result *= i > return result > > I know that the recursive one is problematic, because it's putting a lot of > items on the stack. In fact it's using the stack as if it was a loop > variable. The stack wasn't meant to be used like that. > > Then the question came to me, why? Maybe the stack could be built to handle > this kind of (ab)use? > > I read about tail-call optimization on Wikipedia. If I understand correctly, > the gist of it is that the interpreter tries to recognize, on a > frame-by-frame basis, which frames could be completely eliminated, and then > it eliminates those. Then I read Guido's blog post explaining why he doesn't > want it in Python. In that post he outlined 4 different reasons why TCO > shouldn't be implemented in Python. > > But then I thought, maybe you could do something smarter than eliminating > individual stack frames. Maybe we could create something that is to the > current implementation of the stack what `xrange` is to the old-style > `range`. A smart object that allows access to any of a long list of items in > it, without actually having to store those items. This would solve the first > argument that Guido raises in his post, which I found to be the most > substantial one. > > What I'm saying is: Imagine the stack of the interpreter when it runs the > factorial example above for n=1000. 
It has around 1000 items in it and it's > just about to explode. But then, if you'd look at the contents of that > stack, you'd see it's embarrassingly regular, a compression algorithm's wet > dream. It's just the same code location over and over again, with a > different value for `n`. > > So what I'm suggesting is an algorithm to compress that stack on the fly. An > algorithm that would detect regularities in the stack and instead of saving > each individual frame, save just the pattern. Then, there wouldn't be any > problem with showing informative stack trace: Despite not storing every > individual frame, each individual frame could still be accessed, similarly > to how `xrange` allow access to each individual member without having to > store each of them. > > Then, the stack could store a lot more items, and tasks that currently > require recursion (like pickling using the standard library) will be able to > handle much deeper recursions. > > What do you think? > > > Ram. > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev >

From wayedt at gmail.com Tue Jul 29 17:45:10 2014 From: wayedt at gmail.com (Tyler Wade) Date: Tue, 29 Jul 2014 10:45:10 -0500 Subject: [pypy-dev] Adding support for __eq__ to RPython Message-ID: Hello, While working on my GSoC project to move PyPy's unicode implementation to utf-8, I added support for __eq__ to RPython. It was suggested on IRC that this may be controversial and I should write to the mailing list for further feedback. I've pushed my changes to the rpython-__eq__ branch if you'd like to look at the code itself. My reason for adding __eq__ support (along with other magic methods) is so that Utf8Str can be used as a drop-in replacement for the built-in (RPython) unicode type as much as possible. There are a number of places where the same code is used to handle strings and unicode objects and make use of the standard operators. Needing to create a special case for Utf8Strs in every such place would be rather painful. Note that to prevent confusion, classes are prevented from defining __eq__ if the class at the root of their hierarchy does not do so also. This avoids the case where casting to a common base hides the presence of a __eq__ method. I didn't add such a check for __mul__ and __add__ since the * and + operators aren't defined on RPython instances anyway and thus won't silently behave differently post-translation like == could. Thoughts? Thanks, Tyler

From arigo at tunes.org Wed Jul 30 08:43:11 2014 From: arigo at tunes.org (Armin Rigo) Date: Wed, 30 Jul 2014 08:43:11 +0200 Subject: [pypy-dev] Adding support for __eq__ to RPython In-Reply-To: References: Message-ID: Hi Tyler, On 29 July 2014 17:45, Tyler Wade wrote: > While working on my GSoC project to move PyPy's unicode implementation to utf-8, I added support for __eq__ to RPython. It was suggested on IRC that this may be controversial and I should write to the mailing list for further feedback. I've pushed my changes to the rpython-__eq__ branch if you'd like to look at the code itself. How much rewriting would be involved if you did instead:

    @specialize.argtype(0,1)
    def eq(x, y):
        if isinstance(x, Utf8Str):
            return x.eq(y)
        return x == y

and used the global function eq() instead of the == operator where relevant? A bientôt, Armin.
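Filled out as a self-contained sketch (Utf8Str below is a minimal stand-in, not the class from the rpython-__eq__ branch; the specialize decorator is the usual one from rpython.rlib.objectmodel):

    from rpython.rlib.objectmodel import specialize

    class Utf8Str(object):
        # Minimal stand-in with an explicit eq() method.
        def __init__(self, data):
            self._data = data          # utf-8 encoded bytes

        def eq(self, other):
            return isinstance(other, Utf8Str) and self._data == other._data

    @specialize.argtype(0, 1)
    def eq(x, y):
        # The annotator generates one copy of this helper per combination
        # of argument types, so the isinstance() test constant-folds away
        # in each copy.
        if isinstance(x, Utf8Str):
            return x.eq(y)
        return x == y

    # Call sites spell comparisons as eq(a, b) instead of a == b, e.g.
    #     if eq(key, other_key): ...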
From numerodix at gmail.com Wed Jul 30 21:13:11 2014 From: numerodix at gmail.com (Martin Matusiak) Date: Wed, 30 Jul 2014 21:13:11 +0200 Subject: [pypy-dev] cppyy and out arguments Message-ID: Hi, I was reading about cppyy and I could not find a mention of what it does with out arguments. I also read through the unit tests, but I couldn't find any examples. Are they supported? I also found the section header "CPython" a little confusing on this page as the section doesn't seem to talk about cpython: pypy.readthedocs.org/en/latest/cppyy.html#cpython Martin From arigo at tunes.org Thu Jul 31 16:34:17 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 31 Jul 2014 16:34:17 +0200 Subject: [pypy-dev] cppyy and out arguments In-Reply-To: References: Message-ID: Hi Martin, On 30 July 2014 21:13, Martin Matusiak wrote: > I was reading about cppyy and I could not find a mention of what it > does with out arguments. I also read through the unit tests, but I > couldn't find any examples. Are they supported? A C++ "reference" type like "int &" is implemented like a C++ pointer type "int *" by most C++ compilers, as far as I know. My guess is that you simply have to pretend that "int &" is just "int *". > I also found the section header "CPython" a little confusing on this > page as the section doesn't seem to talk about cpython: > pypy.readthedocs.org/en/latest/cppyy.html#cpython It's about the difference between cppyy and PyCintex, which is an older CPython-only equivalent to cppyy. A bient?t, Armin. From estama at gmail.com Thu Jul 31 16:47:11 2014 From: estama at gmail.com (Eleytherios Stamatogiannakis) Date: Thu, 31 Jul 2014 17:47:11 +0300 Subject: [pypy-dev] CFFI and UTF8 In-Reply-To: References: Message-ID: <53DA56EF.6060609@gmail.com> Hello, First of all, directly supporting UTF8 in CFFI has been discussed before. I'm bringing the same subject again because now PyPy aims to convert to using UTF8 internally by default. So the question is, will CFFI take advantage of that? Right now cffi_backend's "b_string" works with ASCII and widechar strings as input. This means that for UTF-8 input we need to first parse (via ffi.string) the char* as str (1st copy), and then convert it to UTF-8 (doing a 2nd copy?). Wouldn't it be faster to have a ffi.stringUTF8 for the case where we know the input is in UTF8? Ideally we could also have a ffi.stringUTF8const, which knowing that the char* is const (won't be changed by the C side), won't do a copy at all? Kind regards, l. From arigo at tunes.org Thu Jul 31 17:29:45 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 31 Jul 2014 17:29:45 +0200 Subject: [pypy-dev] libdynd In-Reply-To: References: Message-ID: Hi, On 26 July 2014 11:19, Antonio Cuni wrote: > this looks interesting, but from a quick look it seems they are only > offering a C++ API? > In that case, it might be better/easier to wrap it through cppyy than cffi. One or the other, yes. > Also, did Travis told you what are the plans for scipy? No. As far as I know the basic library is still in development. It's just that I have somehow a feeling that the current speed at which numpypy progresses is rather slow, and it has a huge existing code base of expectations as well as messy backward-compatibility requirements. If we could instead throw that away and attach our wagon to the newer development, even if it takes another couple of years before it becomes usable, then it seems like a long-term win to me. 
Also, a cffi or cppyy version seems easier than a RPython version for third-party contributors to help maintain, too. A bient?t, Armin. From arigo at tunes.org Thu Jul 31 17:39:18 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 31 Jul 2014 17:39:18 +0200 Subject: [pypy-dev] CFFI and UTF8 In-Reply-To: <53DA56EF.6060609@gmail.com> References: <53DA56EF.6060609@gmail.com> Message-ID: Hi, On 31 July 2014 16:47, Eleytherios Stamatogiannakis wrote: > Wouldn't it be faster to have a ffi.stringUTF8 for the case where we know > the input is in UTF8? It seems the truth is the opposite of what you expect. Right now, `ffi.string(p).decode('utf-8')` does two copies, whereas in the proposed UTF8 future of PyPy the same expression might possibly be done with only one copy (because `s` and `s.decode('utf-8')` could share the same byte string). It doesn't mean the idea of `ffi.stringUTF8()` is necessarily bad, but it should be a CFFI discussion instead of a PyPy one. I'm "-0" on the idea as adding more complexity to the API for just a minor performance gain (particularly one that disappears in the UTF8 future of PyPy). > Ideally we could also have a ffi.stringUTF8const, which knowing that the > char* is const (won't be changed by the C side), won't do a copy at all? That's not possible: a PyPy string object cannot point directly to raw memory, but must contain its own data, just like a CPython (byte)string object. A bient?t, Armin. From wlavrijsen at lbl.gov Thu Jul 31 19:05:26 2014 From: wlavrijsen at lbl.gov (wlavrijsen at lbl.gov) Date: Thu, 31 Jul 2014 10:05:26 -0700 (PDT) Subject: [pypy-dev] cppyy and out arguments In-Reply-To: References: Message-ID: Hi, > A C++ "reference" type like "int &" is implemented like a C++ pointer > type "int *" by most C++ compilers, as far as I know. My guess is > that you simply have to pretend that "int &" is just "int *". wish that were true. Currently, I have no good int&, although int* will work through an array of size 1. One of them things that need integrating with cffi. (I tried once with CPython's ctypes, but the types are not in the public interface.) > It's about the difference between cppyy and PyCintex, which is an > older CPython-only equivalent to cppyy. Yes, and although the documentation is not wrong, it is out of date. There is a 'cppyy.py' module in recent versions of ROOT. Still suffers from the same dependency issues, but recent repositories (scientific software section of linux distro's, and certainly MacPorts) should have it, so although not nice, it's also not the end of the world to install. (Also means that yes, there is a cppyy on llvm these days, but for CPython, available through ROOT6. MacPorts definitely has it, not sure about others. There are some p3 issues, though, as llvm itself uses p2, but those can be worked around.) I've started rewriting all unit tests on the CPython side to be based on pytest and cppyy, and to synchronize them between the two implementations. Is a lot more work than I expected. :P Best regards, Wim -- WLavrijsen at lbl.gov -- +1 (510) 486 6411 -- www.lavrijsen.net From arigo at tunes.org Thu Jul 31 23:54:53 2014 From: arigo at tunes.org (Armin Rigo) Date: Thu, 31 Jul 2014 23:54:53 +0200 Subject: [pypy-dev] cppyy and out arguments In-Reply-To: References: Message-ID: Hi Wim, Thanks for answering the original question. On 31 July 2014 19:05, wrote: > wish that were true. Currently, I have no good int&, although int* will > work through an array of size 1. One of them things that need integrating > with cffi. 
(I tried once with CPython's ctypes, but the types are not in > the public interface.) ...I wish I could make sense of what you said, but I'm probably just too tired at the moment. I don't understand what the low-level difference between "int*" and "int&" is. More importantly I don't understand why you would mention cffi/ctypes, given that these two are not about C++ at all. There is certainly no plan to support "int&" or any non-C type in cffi. As far as I know these types are not in ctypes either (public interface or not) for the same reason. They are not in libffi either. A bient?t, Armin.
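For completeness, on the original out-argument question at the cffi level: the usual idiom for a C output parameter is to allocate the slot with ffi.new() and pass the pointer, which is also what the "pretend int& is int*" suggestion earlier in the thread amounts to in practice. The function and library names below are invented for the example:

    import cffi

    ffi = cffi.FFI()
    ffi.cdef("int get_answer(int *out);")   # per the suggestion above, a C++
                                            # 'int &out' parameter would be
                                            # declared to cffi the same way
    lib = ffi.dlopen("libexample.so")       # hypothetical library

    p = ffi.new("int *")          # owns one int, zero-initialized
    status = lib.get_answer(p)    # the C side writes through the pointer
    print(p[0])                   # read the out value back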