From carlosedp at gmail.com Thu Dec 1 18:38:58 2011 From: carlosedp at gmail.com (Carlos Eduardo) Date: Thu, 1 Dec 2011 17:38:58 +0000 (UTC) Subject: [pypy-dev] Win32 build of the release 1.7 References: Message-ID: Armin Rigo tunes.org> writes: > > Hi, > > The build has been "officialized" and is now available from > https://bitbucket.org/pypy/pypy/downloads . > > A bient?t, > > Armin. > I tested the 1.7 release on Windows XP using some Stackless Python examples I have. Most of them worked fine, the only problem is that the performance when using Stackless is way slower than the 2.7.1 Stackless build from stackless.com Here is one example: Stackless Python 2.7.1 E:\Dev\Python\Scripts\Stackless>\Apps\stackless2.7\python.exe speed.py Started sleep for 5 seconds. Started sleep for 0.2 seconds. Started sleep for 2 seconds. Started sleep for 1 seconds. Started sleep for 3 seconds. 100000 operations took: 0.18700003624 seconds. Woke after 0.2 seconds. ( 0.203000068665 ) 200000 operations took: 0.344000101089 seconds. 300000 operations took: 0.5 seconds. 400000 operations took: 0.625 seconds. 500000 operations took: 0.733999967575 seconds. Woke after 1 seconds. ( 1.0 ) Woke after 2 seconds. ( 2.0 ) Woke after 3 seconds. ( 3.0 ) Woke after 5 seconds. ( 5.0 ) PyPy 1.7 E:\Dev\Python\Scripts\Stackless>..\..\..\sandbox\pypy-1.7\pypy.exe speed.py Started sleep for 5 seconds. Started sleep for 0.2 seconds. Started sleep for 2 seconds. Started sleep for 1 seconds. Started sleep for 3 seconds. Woke after 0.2 seconds. ( 0.234999895096 ) Woke after 1 seconds. ( 1.0 ) Woke after 2 seconds. ( 2.0 ) Woke after 3 seconds. ( 3.0 ) Woke after 5 seconds. ( 5.0 ) 100000 operations took: 13.8280000687 seconds. 200000 operations took: 25.25 seconds. 300000 operations took: 34.3599998951 seconds. 400000 operations took: 41.2819998264 seconds. 500000 operations took: 46.015999794 seconds. Here is the code used: ------------------------------- import stackless import time sleepingTasklets = [] def Sleep(secondsToWait): ''' Yield the calling tasklet until the given number of seconds have passed. ''' channel = stackless.channel() endTime = time.time() + secondsToWait sleepingTasklets.append((endTime, channel)) sleepingTasklets.sort() # Block until we get sent an awakening notification. channel.receive() def CheckSleepingTasklets(): ''' Function for internal uthread.py usage. ''' while stackless.getruncount() > 1 or sleepingTasklets: if len(sleepingTasklets): endTime = sleepingTasklets[0][0] if endTime <= time.time(): channel = sleepingTasklets[0][1] del sleepingTasklets[0] # We have to send something, but it doesn't matter what as it is not used. channel.send(None) stackless.schedule() stackless.tasklet(CheckSleepingTasklets)() def doStuff(mult=1): st = time.time() c = 0 for i in xrange(int(100000*mult)): c = c + 1 stackless.schedule() print 100000*mult, " operations took: " , time.time() - st , " seconds." def sleepalittle(howmuch): print "Started sleep for", howmuch, " seconds." st = time.time() Sleep(howmuch) print "Woke after ", howmuch, " seconds. (", time.time()-st, ")" stackless.tasklet(doStuff)(1) stackless.tasklet(sleepalittle)(5) stackless.tasklet(sleepalittle)(0.2) stackless.tasklet(doStuff)(2) stackless.tasklet(doStuff)(3) stackless.tasklet(sleepalittle)(2) stackless.tasklet(sleepalittle)(1) stackless.tasklet(sleepalittle)(3) stackless.tasklet(doStuff)(4) stackless.tasklet(doStuff)(5) stackless.run() ------------------------------- Probably this is due to JIT being disabled when using Stackless features. 
Other than the performance, it's working fine with the examples I tested from: http://code.google.com/p/stacklessexamples/ Congratulations for the great work. From arigo at tunes.org Thu Dec 1 19:19:20 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 1 Dec 2011 19:19:20 +0100 Subject: [pypy-dev] Win32 build of the release 1.7 In-Reply-To: References: Message-ID: Hi, On Thu, Dec 1, 2011 at 18:38, Carlos Eduardo wrote: > (...) Probably this is due to JIT being disabled when using Stackless features. > > Other than the performance, it's working fine with the examples I tested from: > http://code.google.com/p/stacklessexamples/ Great to know. Indeed, it's probably because the JIT is disabled. If you would like to have better performance, then please put your example code in a new bug report (http://bugs.pypy.org/), where people can complain and try to motivate us to fix the issue :-) It is involved, but at least it is a well-defined issue. A bient?t, Armin. From russel at russel.org.uk Fri Dec 2 08:56:50 2011 From: russel at russel.org.uk (Russel Winder) Date: Fri, 02 Dec 2011 07:56:50 +0000 Subject: [pypy-dev] 1.6 -> 1.7 Message-ID: <1322812610.27068.27.camel@anglides.russel.org.uk> http://doc.pypy.org/en/latest/index.html#getting-into-pypy still claims 1.6 is the latest official release! What is the right way of reporting these sorts of thing? -- Russel. ============================================================================= Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder at ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel at russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From eric at vanrietpaap.nl Fri Dec 2 09:13:54 2011 From: eric at vanrietpaap.nl (Eric van Riet Paap) Date: Fri, 2 Dec 2011 09:13:54 +0100 Subject: [pypy-dev] 1.6 -> 1.7 In-Reply-To: <1322812610.27068.27.camel@anglides.russel.org.uk> References: <1322812610.27068.27.camel@anglides.russel.org.uk> Message-ID: Same thing goes for http://speed.pypy.org - Eric On Fri, Dec 2, 2011 at 8:56 AM, Russel Winder wrote: > http://doc.pypy.org/en/latest/index.html#getting-into-pypy still claims > 1.6 is the latest official release! > > What is the right way of reporting these sorts of thing? > -- > Russel. > ============================================================================= > Dr Russel Winder ? ? ?t: +44 20 7585 2200 ? voip: sip:russel.winder at ekiga.net > 41 Buckmaster Road ? ?m: +44 7770 465 077 ? xmpp: russel at russel.org.uk > London SW11 1EN, UK ? w: www.russel.org.uk ?skype: russel_winder > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > -- Eric van Riet Paap http://www.be-water.nl | http://ericvanrietpaap.blogspot.com From fijall at gmail.com Fri Dec 2 09:14:10 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 2 Dec 2011 10:14:10 +0200 Subject: [pypy-dev] 1.6 -> 1.7 In-Reply-To: <1322812610.27068.27.camel@anglides.russel.org.uk> References: <1322812610.27068.27.camel@anglides.russel.org.uk> Message-ID: On Fri, Dec 2, 2011 at 9:56 AM, Russel Winder wrote: > http://doc.pypy.org/en/latest/index.html#getting-into-pypy still claims > 1.6 is the latest official release! > > What is the right way of reporting these sorts of thing? 
This is a pretty good way, thanks fixed! > -- > Russel. > ============================================================================= > Dr Russel Winder ? ? ?t: +44 20 7585 2200 ? voip: sip:russel.winder at ekiga.net > 41 Buckmaster Road ? ?m: +44 7770 465 077 ? xmpp: russel at russel.org.uk > London SW11 1EN, UK ? w: www.russel.org.uk ?skype: russel_winder > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > From fijall at gmail.com Fri Dec 2 09:15:51 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 2 Dec 2011 10:15:51 +0200 Subject: [pypy-dev] 1.6 -> 1.7 In-Reply-To: References: <1322812610.27068.27.camel@anglides.russel.org.uk> Message-ID: On Fri, Dec 2, 2011 at 10:13 AM, Eric van Riet Paap wrote: > Same thing goes for http://speed.pypy.org Yeah, thanks, will look at it. > > - Eric > > On Fri, Dec 2, 2011 at 8:56 AM, Russel Winder wrote: >> http://doc.pypy.org/en/latest/index.html#getting-into-pypy still claims >> 1.6 is the latest official release! >> >> What is the right way of reporting these sorts of thing? >> -- >> Russel. >> ============================================================================= >> Dr Russel Winder ? ? ?t: +44 20 7585 2200 ? voip: sip:russel.winder at ekiga.net >> 41 Buckmaster Road ? ?m: +44 7770 465 077 ? xmpp: russel at russel.org.uk >> London SW11 1EN, UK ? w: www.russel.org.uk ?skype: russel_winder >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> >> > > > > -- > Eric van Riet Paap > http://www.be-water.nl | http://ericvanrietpaap.blogspot.com > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From alexgolecmailinglists at gmail.com Sat Dec 3 21:22:23 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Sat, 3 Dec 2011 15:22:23 -0500 Subject: [pypy-dev] Question about pypy Message-ID: <2580F298-6EA4-4C0C-8AC2-A7436065FFE7@gmail.com> Hi all, I'm a student at Columbia University, and I'm taking a graduate course with Alfred Aho, the author of the dragon book, on advanced compilers techniques. I've been researching the pypy project in general, and rpython in particular, and I'd like to ask you guys for some feedback on the current sketch of my presentation. Aho has mentioned on several occasions that he is very excited to receive my talk, and I'd like to get some feedback from you guys about it before I put it forward to him. So then, my talk will discuss rpython's approach to translation, and here is the current outline: - Compiling python to C is easy: just inline the implementation of every opcode handler durr hurr hurr - Ok, seriously, can you do it in a performant manner? - Python has some semantics that make this difficult, in particular: - opcodes are type-agnostic - opcodes are high-level, they do high-level things with high-level arguments. eg. the BUILD_CLASS opcode - opcodes include namespace operations - This type-agnostic bit is the real tricky part because C requires all expressions to have a type, while python does not - Vanilla cartesian product type inference doesn't really work because the number of types is undecidable - rpython gets around this by imposing a restriction on dynamic type creation. 
- The details of the annotator are omitted due to time constraints So then, the crux of my talk is the type annotator. I won't be going deeply into the flow and object spaces in the talk, deferring to my paper. Is there anything major I'm missing? Did I get anything horribly wrong? Alex From benjamin at python.org Sat Dec 3 21:31:13 2011 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 3 Dec 2011 15:31:13 -0500 Subject: [pypy-dev] Question about pypy In-Reply-To: <2580F298-6EA4-4C0C-8AC2-A7436065FFE7@gmail.com> References: <2580F298-6EA4-4C0C-8AC2-A7436065FFE7@gmail.com> Message-ID: 2011/12/3 Alexander Golec : > Hi all, > > I'm a student at Columbia University, and I'm taking a graduate course with Alfred Aho, the author of the dragon book, on advanced compilers techniques. I've been researching the pypy project in general, and rpython in particular, and I'd like to ask you guys for some feedback on the current sketch of my presentation. Aho has mentioned on several occasions that he is very excited to receive my talk, and I'd like to get some feedback from you guys about it before I put it forward to him. > > So then, my talk will discuss rpython's approach to translation, and here is the current outline: > > ?- Compiling python to C is easy: just inline the implementation of every opcode handler durr hurr hurr > ?- Ok, seriously, can you do it in a performant manner? > ?- Python has some semantics that make this difficult, in particular: > ? ?- opcodes are type-agnostic > ? ?- opcodes are high-level, they do high-level things with high-level arguments. eg. the BUILD_CLASS opcode > ? ?- opcodes include namespace operations > ?- This type-agnostic bit is the real tricky part because C requires all expressions to have a type, while python does not > ?- Vanilla cartesian product type inference doesn't really work because the number of types is undecidable > ?- rpython gets around this by imposing a restriction on dynamic type creation. > ?- The details of the annotator are omitted due to time constraints So you're not going to detail > > So then, the crux of my talk is the type annotator. about the crux of your talk? -- Regards, Benjamin From alexgolecmailinglists at gmail.com Sat Dec 3 21:33:40 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Sat, 3 Dec 2011 15:33:40 -0500 Subject: [pypy-dev] Question about pypy In-Reply-To: References: <2580F298-6EA4-4C0C-8AC2-A7436065FFE7@gmail.com> Message-ID: <747CE6D1-C1BB-4FBF-9425-6D15EC9DFB7A@gmail.com> Erm, I'll revise that. The thesis of the talk is that rpython has to place restrictions on the use of types, and I leave discussing the annotator off to the term paper I'll be presenting. The purpose of this talk is a half-hour exposition of what we've been doing this semester, which basically makes it a promo for the term paper. Alex On Dec 3, 2011, at 3:31 PM, Benjamin Peterson wrote: > 2011/12/3 Alexander Golec : >> Hi all, >> >> I'm a student at Columbia University, and I'm taking a graduate course with Alfred Aho, the author of the dragon book, on advanced compilers techniques. I've been researching the pypy project in general, and rpython in particular, and I'd like to ask you guys for some feedback on the current sketch of my presentation. Aho has mentioned on several occasions that he is very excited to receive my talk, and I'd like to get some feedback from you guys about it before I put it forward to him. 
>> >> So then, my talk will discuss rpython's approach to translation, and here is the current outline: >> >> - Compiling python to C is easy: just inline the implementation of every opcode handler durr hurr hurr >> - Ok, seriously, can you do it in a performant manner? >> - Python has some semantics that make this difficult, in particular: >> - opcodes are type-agnostic >> - opcodes are high-level, they do high-level things with high-level arguments. eg. the BUILD_CLASS opcode >> - opcodes include namespace operations >> - This type-agnostic bit is the real tricky part because C requires all expressions to have a type, while python does not >> - Vanilla cartesian product type inference doesn't really work because the number of types is undecidable >> - rpython gets around this by imposing a restriction on dynamic type creation. >> - The details of the annotator are omitted due to time constraints > > So you're not going to detail > >> >> So then, the crux of my talk is the type annotator. > > about the crux of your talk? > > -- > Regards, > Benjamin From fijall at gmail.com Sun Dec 4 10:03:45 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 4 Dec 2011 11:03:45 +0200 Subject: [pypy-dev] Question about pypy In-Reply-To: <747CE6D1-C1BB-4FBF-9425-6D15EC9DFB7A@gmail.com> References: <2580F298-6EA4-4C0C-8AC2-A7436065FFE7@gmail.com> <747CE6D1-C1BB-4FBF-9425-6D15EC9DFB7A@gmail.com> Message-ID: On Sat, Dec 3, 2011 at 10:33 PM, Alexander Golec wrote: > Erm, I'll revise that. The thesis of the talk is that rpython has to place restrictions on the use of types, and I leave discussing the annotator off to the term paper I'll be presenting. The purpose of this talk is a half-hour exposition of what we've been doing this semester, which basically makes it a promo for the term paper. > > Alex > RPython does more than that. It assumes that globals are constant for example. The important part is that rpython puts enough restrictions that global analysis of the entire system is possible. Also an important point worth mentioning is that Python is a meta-programming language for RPython, which makes RPython much better than Java. > > On Dec 3, 2011, at 3:31 PM, Benjamin Peterson wrote: > >> 2011/12/3 Alexander Golec : >>> Hi all, >>> >>> I'm a student at Columbia University, and I'm taking a graduate course with Alfred Aho, the author of the dragon book, on advanced compilers techniques. I've been researching the pypy project in general, and rpython in particular, and I'd like to ask you guys for some feedback on the current sketch of my presentation. Aho has mentioned on several occasions that he is very excited to receive my talk, and I'd like to get some feedback from you guys about it before I put it forward to him. >>> >>> So then, my talk will discuss rpython's approach to translation, and here is the current outline: >>> >>> ?- Compiling python to C is easy: just inline the implementation of every opcode handler durr hurr hurr >>> ?- Ok, seriously, can you do it in a performant manner? >>> ?- Python has some semantics that make this difficult, in particular: >>> ? ?- opcodes are type-agnostic >>> ? ?- opcodes are high-level, they do high-level things with high-level arguments. eg. the BUILD_CLASS opcode >>> ? 
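A minimal sketch of what that last point means in practice (illustrative only, not code from the thread; the names are invented): ordinary Python runs at import time, before translation starts, and can build the functions that the RPython toolchain then analyses.

-------------------------------
# Full Python executes while the target program is being loaded for
# translation, so it can generate specialized functions; only the code
# that survives to be translated has to obey the RPython restrictions.

def make_adder(n):
    def adder(x):
        return x + n      # 'n' is already a fixed constant by translation time
    return adder

add_one = make_adder(1)   # built at import time by ordinary Python
add_ten = make_adder(10)

def entry_point(argv):    # this function must be valid RPython
    print add_one(add_ten(31))
    return 0
-------------------------------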
- opcodes include namespace operations >>> - This type-agnostic bit is the real tricky part because C requires all expressions to have a type, while python does not >>> - Vanilla cartesian product type inference doesn't really work because the number of types is undecidable >>> - rpython gets around this by imposing a restriction on dynamic type creation. >>> - The details of the annotator are omitted due to time constraints >> >> So you're not going to detail >> >>> >>> So then, the crux of my talk is the type annotator. >> >> about the crux of your talk? >> >> -- >> Regards, >> Benjamin > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev

From lac at openend.se Sun Dec 4 14:06:04 2011 From: lac at openend.se (Laura Creighton) Date: Sun, 04 Dec 2011 14:06:04 +0100 Subject: [pypy-dev] Question about pypy References: <2580F298-6EA4-4C0C-8AC2-A7436065FFE7@gmail.com> <747CE6D1-C1BB-4FBF-9425-6D15EC9DFB7A@gmail.com> Message-ID: <201112041306.pB4D64oG031168@theraft.openend.se>

Something major you are not mentioning is that pypy is a compiler generator, and not a hand-written compiler for a particular language. Thus we have PyProlog, which implements Prolog, and GameGirl, which implements the GameBoy language. It's this architecture, in my opinion, which makes PyPy advanced. Just reading your sketch left me with the impression that what you were going to present was a list of problems that somebody wishing to hand-write a compiler for Python that generates C would face. The point of RPython is not that it makes the generation of C code easier, though it does that, but that it is a much higher-level language, which makes it possible to do things that would be, if not impossible, at least hugely harder than if it were all written in C.

So, given an interpreter for any dynamic language, written in RPython, you can now get your choice of garbage collectors, and a JIT thrown in 'for free' as it were. You won't have to write one by hand for every language you are interested in -- including ones you design yourself, if your mind works that way. So, for instance, nobody has written an awk interpreter in RPython (at least as far as I know), but it wouldn't be a very difficult thing to do. And once that is done, you would have a tracing awk compiler with a generational garbage collector -- which ought to be very fast. No need to write those parts again for your fast-awk.

Thus the challenge for RPython was to be high-level and expressive enough to make it possible to write a pluggable GC, or a pluggable JIT, or indeed the annotator itself, while still being static enough that you can do whole-program analysis in the first place. That's the 'real tricky part', not, as you write, 'This type-agnostic bit is the real tricky part because C requires all expressions to have a type, while python does not'. It's not the need to generate C that imposes the limits on RPython -- indeed, in the past we have generated code for the JVM and the CLI -- and played with generating JavaScript and Lisp; it's the need to do whole-program analysis.

Other note: some of the problems in making a fast compiler for Python are those it shares with any dynamic language. Some are unique to Python itself, and it might be a good idea to separate the two in your presentation.

You might want to look at the slides of the talk that Armin gave this March at Stanford.
http://www.stanford.edu/class/ee380/Abstracts/110302.html The whole thing was videotaped, I think you can find it here: http://www.stanford.edu/class/ee380/winter-schedule-20102011.html Good luck, Laura Creighton

From valentin.perrelle at orange.fr Sun Dec 4 18:58:33 2011 From: valentin.perrelle at orange.fr (Valentin Perrelle) Date: Sun, 04 Dec 2011 18:58:33 +0100 Subject: [pypy-dev] Metaprogramming with syntax extension Message-ID: <4EDBB4C9.8090902@orange.fr>

Hi, Over the last few months I played a bit with syntax extension in order to develop a network programming paradigm inspired by the UnrealScript language. I did that in Lua with the help of Metalua, which uses a Lua compiler written in Lua. I was wondering if I could do the same with Python. Is it possible with PyPy to dynamically (or even statically) change the parser in order to accept new syntax constructions? All I need is to perform program transformations: any program using the new syntax could be translated to a valid Python program without the syntax extension. Basically, I need to change the way some class methods are called and some properties are accessed. Of course, it could be done with the dedicated metamethods, but the amount of code needed to do this would be big, repetitive, uninformative and hard to factor out. Valentin Perrelle.

From benjamin at python.org Sun Dec 4 19:51:28 2011 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 4 Dec 2011 13:51:28 -0500 Subject: [pypy-dev] Metaprogramming with syntax extension In-Reply-To: <4EDBB4C9.8090902@orange.fr> References: <4EDBB4C9.8090902@orange.fr> Message-ID: 2011/12/4 Valentin Perrelle : > Hi, > > Over the last few months I played a bit with syntax extension in order to develop a > network programming paradigm inspired by the UnrealScript language. I did > that in Lua with the help of Metalua, which uses a Lua compiler written in Lua. I was > wondering if I could do the same with Python. > > Is it possible with PyPy to dynamically (or even statically) change the > parser in order to accept new syntax constructions? You would have to modify the grammar in pypy/interpreter/pyparser/data and transform it into some reasonable AST in pypy/interpreter/astcompiler/astbuilder.py. -- Regards, Benjamin

From alexgolecmailinglists at gmail.com Sun Dec 4 20:07:35 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Sun, 4 Dec 2011 14:07:35 -0500 Subject: [pypy-dev] Question about pypy In-Reply-To: <201112041306.pB4D64oG031168@theraft.openend.se> References: <2580F298-6EA4-4C0C-8AC2-A7436065FFE7@gmail.com> <747CE6D1-C1BB-4FBF-9425-6D15EC9DFB7A@gmail.com> <201112041306.pB4D64oG031168@theraft.openend.se> Message-ID: <5D58A762-0131-47C3-B161-ED86D7B2BFBF@gmail.com>

Thanks for this point. I think this tied together my understanding of the purpose of rpython as a framework. Where is the distinction between rpython as a language and rpython as a compiler generator? I presume this is covered in the 'Compiling Dynamic Language Implementations' paper? Alex

On Dec 4, 2011, at 8:06 AM, Laura Creighton wrote: > Something major you are not mentioning is that pypy is a compiler > generator, and not a hand-written compiler for a particular language. > Thus we have PyProlog, which implements Prolog, and GameGirl which > implements the GameBoy language. It's this architecture, in my opinion, > which makes PyPy advanced.
Just reading your sketch left me with the > impression that what you were going to present was a list of problems > that somebody wishing to hand-write a compiler for Python that generates > C would face. The point of RPython is not that it makes the generation > of C code easier, though it does that, but that because its a much > higher level language which makes it possible to do things that would > be, if not impossible, at least a hugely harder than if it were all > written in C. > > So, given an interpreter for any dynamic language, written in RPython, > you can now get your choice of garbage collectors, and a JIT thrown in > 'for free' as it were. You won't have to write one by hand for every > lanauge you are interested in -- including ones you design yourself > if your mind works that way. So, for instance, nobody has written an > awk interpreter in RPython (at least as far as I know) but it wouldn't > be a very difficult thing to do. And once that is done, you would > have a tracing awk compiler with a generational garbage collector -- > which ought to be very fast. No need to write those parts again for > your fast-awk. > > Thus the challenge for RPython was to be high-level and expressive > enough to make it possible to write a pluggable gc, or a pluggable > JIT, or indeed the annotator itself, while still being static enough > that you can do whole program analysis in the first place. That's the > 'real tricky part', not as you write 'This type-agnostic bit is the > real tricky part because C requires all expressions to have a type, > while python does not'. It's not the need to generate C that imposes > the limits on RPython -- indeed, in the past we have generated for the > JVM and the CLI -- and played with generating javascript and Lisp, its > the need to do whole program analysis. > > Other note: Some of the problems in making a fast compiler for Python > are those it shares with any dynamic language. Some are unique to > Python itself, and it might be a good idea to separate the two in your > presentation. > > You might want to look at the slides of the talk that Armin gave this > March at Stanford. http://www.stanford.edu/class/ee380/Abstracts/110302.html > > The whole thing was videotaped, I think you can find it here: > http://www.stanford.edu/class/ee380/winter-schedule-20102011.html > > Good luck, > Laura Creighton > > From fijall at gmail.com Sun Dec 4 20:28:53 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 4 Dec 2011 21:28:53 +0200 Subject: [pypy-dev] Question about pypy In-Reply-To: <5D58A762-0131-47C3-B161-ED86D7B2BFBF@gmail.com> References: <2580F298-6EA4-4C0C-8AC2-A7436065FFE7@gmail.com> <747CE6D1-C1BB-4FBF-9425-6D15EC9DFB7A@gmail.com> <201112041306.pB4D64oG031168@theraft.openend.se> <5D58A762-0131-47C3-B161-ED86D7B2BFBF@gmail.com> Message-ID: On Sun, Dec 4, 2011 at 9:07 PM, Alexander Golec wrote: > Thanks for this point. I think this tied together my understanding of the the purpose of rpython as a framework. Where is the distinction between rpython as a language and rpython as a compiler generator? I presume this is covered in the 'Compiling Dynamic Language Implementations' paper? There is no distinction between rpython as a language and rpython as a compiler generator. it's just that rpython is a good language to write dynamic language VMs in (or we think so, works fine so far). It's also not really truly for compilers, just for VMs. 
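To give the "interpreter written in RPython" idea above a concrete shape, here is a deliberately tiny sketch (invented opcodes, not code from the thread). Roughly, this is what the translation toolchain consumes: a target file whose entry point is RPython, from which it generates a C-level VM with a garbage collector (and, with the JIT hints added, a tracing JIT) for free.

-------------------------------
def interpret(program):
    # 'program' is a list of (opcode, operand) pairs for a toy
    # accumulator machine.
    acc = 0
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == 'ADD':
            acc += arg
        elif op == 'MUL':
            acc *= arg
        pc += 1
    return acc

def entry_point(argv):
    # The generated executable starts here; must be valid RPython.
    print interpret([('ADD', 2), ('MUL', 21)])
    return 0

def target(driver, args):
    # Conventional hook that the translation driver looks up
    # in a target file.
    return entry_point, None
-------------------------------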
> > Alex > > > On Dec 4, 2011, at 8:06 AM, Laura Creighton wrote: > >> Something major you are not mentioning is that pypy is a compiler >> generator, and not a hand-written compiler for a particular language. >> Thus we have PyProlog, which implements Prolog, and GameGirl which >> implements the GameBoy language. ?It's this architecture, in my opinion, >> which makes PyPy advanced. ?Just reading your sketch left me with the >> impression that what you were going to present was a list of problems >> that somebody wishing to hand-write a compiler for Python that generates >> C would face. ?The point of RPython is not that it makes the generation >> of C code easier, though it does that, but that because its a much >> higher level language which makes it possible to do things that would >> be, if not impossible, at least a hugely harder than if it were all >> written in C. >> >> So, given an interpreter for any dynamic language, written in RPython, >> you can now get your choice of garbage collectors, and a JIT thrown in >> 'for free' as it were. ?You won't have to write one by hand for every >> lanauge you are interested in -- including ones you design yourself >> if your mind works that way. ?So, for instance, nobody has written an >> awk interpreter in RPython (at least as far as I know) but it wouldn't >> be a very difficult thing to do. ?And once that is done, you would >> have a tracing awk compiler with a generational garbage collector -- >> which ought to be very fast. ?No need to write those parts again for >> your fast-awk. >> >> Thus the challenge for RPython was to be high-level and expressive >> enough to make it possible to write a pluggable gc, or a pluggable >> JIT, or indeed the annotator itself, while still being static enough >> that you can do whole program analysis in the first place. ?That's the >> 'real tricky part', not as you write 'This type-agnostic bit is the >> real tricky part because C requires all expressions to have a type, >> while python does not'. ?It's not the need to generate C that imposes >> the limits on RPython -- indeed, in the past we have generated for the >> JVM and the CLI -- and played with generating javascript and Lisp, its >> the need to do whole program analysis. >> >> Other note: Some of the problems in making a fast compiler for Python >> are those it shares with any dynamic language. ?Some are unique to >> Python itself, and it might be a good idea to separate the two in your >> presentation. >> >> You might want to look at the slides of the talk that Armin gave this >> March at Stanford. http://www.stanford.edu/class/ee380/Abstracts/110302.html >> >> The whole thing was videotaped, I think you can find it here: >> http://www.stanford.edu/class/ee380/winter-schedule-20102011.html >> >> Good luck, >> Laura Creighton >> >> > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From ram at rachum.com Sun Dec 4 23:30:39 2011 From: ram at rachum.com (Ram Rachum) Date: Mon, 5 Dec 2011 00:30:39 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> Message-ID: Hey guys, I'm happy to say that I don't see this crash anymore in PyPy 1.7! Thanks! Ram. On Mon, Oct 10, 2011 at 5:13 PM, Ram Rachum wrote: > Tried it now with this Zip, getting the same crash. 
> > On Mon, Oct 10, 2011 at 5:13 PM, Ram Rachum wrote: >> Tried it now with this Zip, getting the same crash. >> >> >> On Mon, Oct 10, 2011 at 4:58 PM, Armin Rigo wrote: >>> Hi, >>> >>> On Mon, Oct 10, 2011 at 13:11, Ram Rachum wrote: >>> > Trying to download this file results in getting a tiny corrupted >>> archive. >>> > Also, I don't know whether this is a source release or a binary >>> release. I >>> > don't know to compile so I can only use a binary one. >>> >>> Bah, I don't know why the recent zip files are all empty. Here is the >>> latest non-empty one: >>> >>> http://buildbot.pypy.org/nightly/trunk/pypy-c-jit-47320-6b92b3aa1cbb-win32.zip >>> >>> It's a binary release. >>> >>> >>> A bientôt, >>> >>> Armin. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From fijall at gmail.com Mon Dec 5 08:51:41 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 5 Dec 2011 09:51:41 +0200 Subject: [pypy-dev] PyPy 1.6 not working on Windows XP In-Reply-To: References: <1RBXj4-12VLJA0@fwd18.aul.t-online.de> Message-ID: On Mon, Dec 5, 2011 at 12:30 AM, Ram Rachum wrote: > Hey guys, > > I'm happy to say that I don't see this crash anymore in PyPy 1.7! Thanks! > Pleasure! > > Ram. > > On Mon, Oct 10, 2011 at 5:13 PM, Ram Rachum wrote: >> >> Tried it now with this Zip, getting the same crash. >> >> >> On Mon, Oct 10, 2011 at 4:58 PM, Armin Rigo wrote: >>> >>> Hi, >>> >>> On Mon, Oct 10, 2011 at 13:11, Ram Rachum wrote: >>> > Trying to download this file results in getting a tiny corrupted >>> > archive. >>> > Also, I don't know whether this is a source release or a binary >>> > release. I >>> > don't know to compile so I can only use a binary one. >>> >>> Bah, I don't know why the recent zip files are all empty. Here is the >>> latest non-empty one: >>> >>> http://buildbot.pypy.org/nightly/trunk/pypy-c-jit-47320-6b92b3aa1cbb-win32.zip >>> >>> It's a binary release. >>> >>> >>> A bientôt, >>> >>> Armin. >> >> > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev >

From thejfasi at gmail.com Tue Dec 6 05:04:40 2011 From: thejfasi at gmail.com (Alexander Golec) Date: Mon, 5 Dec 2011 23:04:40 -0500 Subject: [pypy-dev] How are namespaces implemented in rpython?
Message-ID: <6464F1B8-5391-49D8-BFB1-ADF81C76A6A6@gmail.com> Hey all, I'm preparing a presentation for Alfred Aho at Columbia, and I'd like to ask how are namespaces translated during the translation phase. Are they implemented dynamically, or are they actually compiled down to C? Alex From alex.gaynor at gmail.com Tue Dec 6 05:33:20 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 5 Dec 2011 23:33:20 -0500 Subject: [pypy-dev] How are namespaces implemented in rpython? In-Reply-To: <6464F1B8-5391-49D8-BFB1-ADF81C76A6A6@gmail.com> References: <6464F1B8-5391-49D8-BFB1-ADF81C76A6A6@gmail.com> Message-ID: On Mon, Dec 5, 2011 at 11:04 PM, Alexander Golec wrote: > Hey all, > > I'm preparing a presentation for Alfred Aho at Columbia, and I'd like to > ask how are namespaces translated during the translation phase. Are they > implemented dynamically, or are they actually compiled down to C? > > Alex > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > If I understand you correctly, you're asking if the notion of a module (or namespace) exists at runtime for an RPython program, or whether that's resolved at compile time? The answer to that question is that all imports in RPython are resolved at compile time, and thus modules are a purely compile time concept. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Tue Dec 6 05:33:22 2011 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 5 Dec 2011 23:33:22 -0500 Subject: [pypy-dev] How are namespaces implemented in rpython? In-Reply-To: <6464F1B8-5391-49D8-BFB1-ADF81C76A6A6@gmail.com> References: <6464F1B8-5391-49D8-BFB1-ADF81C76A6A6@gmail.com> Message-ID: 2011/12/5 Alexander Golec : > Hey all, > > I'm preparing a presentation for Alfred Aho at Columbia, and I'd like to ask how are namespaces translated during the translation phase. Are they implemented dynamically, or are they actually compiled down to C? I assume you mean modules? They're all constant folded out by the flow space. -- Regards, Benjamin From thejfasi at gmail.com Tue Dec 6 05:36:26 2011 From: thejfasi at gmail.com (Alexander Golec) Date: Mon, 5 Dec 2011 23:36:26 -0500 Subject: [pypy-dev] How are namespaces implemented in rpython? In-Reply-To: References: <6464F1B8-5391-49D8-BFB1-ADF81C76A6A6@gmail.com> Message-ID: <2504A37C-D623-47B7-9525-65D7508CF7C7@gmail.com> Impressive. I would think that there would be some trouble with names that may or may not be included in the namespace depending on execution of the module. The module would still need to be executed at runtime, but I'm guessing the names that it might produce are bounded? Alex On Dec 5, 2011, at 11:33 PM, Alex Gaynor wrote: > > > On Mon, Dec 5, 2011 at 11:04 PM, Alexander Golec wrote: > Hey all, > > I'm preparing a presentation for Alfred Aho at Columbia, and I'd like to ask how are namespaces translated during the translation phase. Are they implemented dynamically, or are they actually compiled down to C? 
> > Alex > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > If I understand you correctly, you're asking if the notion of a module (or namespace) exists at runtime for an RPython program, or whether that's resolved at compile time? > > The answer to that question is that all imports in RPython are resolved at compile time, and thus modules are a purely compile time concept. > > Alex > > -- > "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gaynor at gmail.com Tue Dec 6 05:37:32 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 5 Dec 2011 23:37:32 -0500 Subject: [pypy-dev] How are namespaces implemented in rpython? In-Reply-To: <2504A37C-D623-47B7-9525-65D7508CF7C7@gmail.com> References: <6464F1B8-5391-49D8-BFB1-ADF81C76A6A6@gmail.com> <2504A37C-D623-47B7-9525-65D7508CF7C7@gmail.com> Message-ID: On Mon, Dec 5, 2011 at 11:36 PM, Alexander Golec wrote: > Impressive. I would think that there would be some trouble with names that > may or may not be included in the namespace depending on execution of the > module. The module would still need to be executed at runtime, but I'm > guessing the names that it might produce are bounded? > > Alex > > > > On Dec 5, 2011, at 11:33 PM, Alex Gaynor wrote: > > > > On Mon, Dec 5, 2011 at 11:04 PM, Alexander Golec wrote: > >> Hey all, >> >> I'm preparing a presentation for Alfred Aho at Columbia, and I'd like to >> ask how are namespaces translated during the translation phase. Are they >> implemented dynamically, or are they actually compiled down to C? >> >> Alex >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> > > If I understand you correctly, you're asking if the notion of a module (or > namespace) exists at runtime for an RPython program, or whether that's > resolved at compile time? > > The answer to that question is that all imports in RPython are resolved at > compile time, and thus modules are a purely compile time concept. > > Alex > > -- > "I disapprove of what you say, but I will defend to the death your right > to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > > > During compilation it is statically known what names exist within a module, this is one of the restrictions of RPython. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexgolecmailinglists at gmail.com Tue Dec 6 05:40:33 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Mon, 5 Dec 2011 23:40:33 -0500 Subject: [pypy-dev] How are namespaces implemented in rpython? 
In-Reply-To: <2504A37C-D623-47B7-9525-65D7508CF7C7@gmail.com> References: <6464F1B8-5391-49D8-BFB1-ADF81C76A6A6@gmail.com> <2504A37C-D623-47B7-9525-65D7508CF7C7@gmail.com> Message-ID: <8971458C-14D8-4092-A4F7-538ED05FC6FF@gmail.com> I'm guessing this explains this fragment from the rpython coding guide: constants all module globals are considered constants. Their binding must not be changed at run-time. Moreover, global (i.e. prebuilt) lists and dictionaries are supposed to be immutable: modifying e.g. a global list will give inconsistent results. However, global instances don?t have this restriction, so if you need mutable global state, store it in the attributes of some prebuilt singleton instance. Alex On Dec 5, 2011, at 11:36 PM, Alexander Golec wrote: > Impressive. I would think that there would be some trouble with names that may or may not be included in the namespace depending on execution of the module. The module would still need to be executed at runtime, but I'm guessing the names that it might produce are bounded? > > Alex > > > On Dec 5, 2011, at 11:33 PM, Alex Gaynor wrote: > >> >> >> On Mon, Dec 5, 2011 at 11:04 PM, Alexander Golec wrote: >> Hey all, >> >> I'm preparing a presentation for Alfred Aho at Columbia, and I'd like to ask how are namespaces translated during the translation phase. Are they implemented dynamically, or are they actually compiled down to C? >> >> Alex >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev >> >> If I understand you correctly, you're asking if the notion of a module (or namespace) exists at runtime for an RPython program, or whether that's resolved at compile time? >> >> The answer to that question is that all imports in RPython are resolved at compile time, and thus modules are a purely compile time concept. >> >> Alex >> >> -- >> "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) >> "The people's good is the highest law." -- Cicero >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From cfbolz at gmx.de Tue Dec 6 11:54:14 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 06 Dec 2011 11:54:14 +0100 Subject: [pypy-dev] How are namespaces implemented in rpython? In-Reply-To: <8971458C-14D8-4092-A4F7-538ED05FC6FF@gmail.com> References: <6464F1B8-5391-49D8-BFB1-ADF81C76A6A6@gmail.com> <2504A37C-D623-47B7-9525-65D7508CF7C7@gmail.com> <8971458C-14D8-4092-A4F7-538ED05FC6FF@gmail.com> Message-ID: <4EDDF456.8010709@gmx.de> On 12/06/2011 05:40 AM, Alexander Golec wrote: > I'm guessing this explains this fragment from the rpython coding guide: > > *constants* > > all module globals are considered constants. Their binding must not > be changed at run-time. Moreover, global (i.e. prebuilt) lists and > dictionaries are supposed to be immutable: modifying e.g. a global > list will give inconsistent results. However, global instances don?t > have this restriction, so if you need mutable global state, store it > in the attributes of some prebuilt singleton instance. Exactly. 
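To make the quoted rule concrete, a short illustrative sketch of the prebuilt-singleton pattern it recommends (names invented, not taken from the coding guide):

-------------------------------
class Stats(object):
    def __init__(self):
        self.hits = 0

stats = Stats()        # prebuilt before translation; this binding never changes

def record_hit():
    stats.hits += 1    # mutating attributes of the prebuilt instance is fine

def reset_hits():
    # Rebinding the global itself ("stats = Stats()") would not be
    # accepted; mutating the existing instance is.
    stats.hits = 0
-------------------------------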
Cheers, Carl Friedrich

From asouzaleite at gmx.de Wed Dec 7 09:31:12 2011 From: asouzaleite at gmx.de (Aroldo Souza-Leite) Date: Wed, 07 Dec 2011 09:31:12 +0100 Subject: [pypy-dev] problem building a Grok project In-Reply-To: References: <4ECBA08C.6090105@gmail.com> Message-ID: <4EDF2450.7070204@gmx.de>

Hi, I'm failing to install a Grok project with 'grokproject' in a PyPy virtual environment. I don't have this problem on a regular Python-2.7 virtual environment. pypy-c-jit-50199-24a17a8610e1-linux Ubuntu Lucid Lynx Regards, Aroldo.

------ links ----- http://pypi.python.org/pypi/grokproject http://grok.zope.org/about/download

------------ output ------------------------------------- (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ which python /home/aroldo/tmp/python/tmp-env-pypy/bin/python (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ python Python 2.7.1 (24a17a8610e1, Dec 06 2011, 04:30:54) [PyPy 1.7.1-dev0 with GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. executing the Python startup file: "~/.pystartup" And now for something completely different: ``out-of-lie-guards'' >>>> quit() (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ which pip /home/aroldo/tmp/python/tmp-env-pypy/bin/pip (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ env|grep ENV VIRTUAL_ENV=/home/aroldo/tmp/python/tmp-env-pypy PIP_ENVIRONMENT=/home/aroldo/tmp/python/tmp-env-pypy (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ pip install grokproject Downloading/unpacking grokproject Downloading grokproject-2.6.tar.gz (68Kb): 68Kb downloaded Running setup.py egg_info for package grokproject Downloading/unpacking PasteScript>=1.6 (from grokproject) Downloading PasteScript-1.7.5.tar.gz (129Kb): 129Kb downloaded Running setup.py egg_info for package PasteScript ... Installing grokproject script to /home/aroldo/tmp/python/tmp-env-pypy/bin Running setup.py install for PasteScript ... Successfully installed grokproject PasteScript Paste PasteDeploy Cleaning up...
(tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ which grokproject /home/aroldo/tmp/python/tmp-env-pypy/bin/grokproject (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ grokproject MyCave Enter user (Name of an initial administrator user): grok Enter passwd (Password for the initial administrator user): Traceback (most recent call last): File "app_main.py", line 51, in run_toplevel File "/home/aroldo/tmp/python/tmp-env-pypy/bin/grokproject", line 8, in load_entry_point('grokproject==2.6', 'console_scripts', 'grokproject')() File "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/grokproject/main.py", line 91, in main exit_code = runner.run(option_args + ['-t', template_name, project] + extra_args) File "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/paste/script/command.py", line 238, in run result = self.command() File "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/paste/script/create_distro.py", line 125, in command vars = template.check_vars(vars, self) File "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/grokproject/templates.py", line 66, in check_vars vars = super(GrokProject, self).check_vars(vars, cmd) File "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/paste/script/templates.py", line 73, in check_vars response = cmd.challenge(prompt, var.default, var.should_echo) File "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/paste/script/command.py", line 321, in challenge response = prompt_method(prompt).strip() File "/opt/pypy/lib-python/2.7/getpass.py", line 74, in unix_getpass stream.flush() # issue7208 IOError: [Errno 29] Illegal seek: '' (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ From santagada at gmail.com Wed Dec 7 14:44:43 2011 From: santagada at gmail.com (Leonardo Santagada) Date: Wed, 7 Dec 2011 11:44:43 -0200 Subject: [pypy-dev] problem building a Grok project In-Reply-To: <4EDF2450.7070204@gmx.de> References: <4ECBA08C.6090105@gmail.com> <4EDF2450.7070204@gmx.de> Message-ID: On Wed, Dec 7, 2011 at 6:31 AM, Aroldo Souza-Leite wrote: > > > I' m failing to install a Grok project with 'groproject' in a PyPy virtual > Environment. I don't have this problem on a regular Python-2.7 virtual > environment. Your error seems like a pypy bug, but are you sure grok is compatible with pypy? Doesn't it need ZODB? -- Leonardo Santagada From fijall at gmail.com Wed Dec 7 14:50:17 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 7 Dec 2011 15:50:17 +0200 Subject: [pypy-dev] problem building a Grok project In-Reply-To: <4EDF2450.7070204@gmx.de> References: <4ECBA08C.6090105@gmail.com> <4EDF2450.7070204@gmx.de> Message-ID: On Wed, Dec 7, 2011 at 10:31 AM, Aroldo Souza-Leite wrote: > Hi, > > I' m failing to install a Grok project with 'groproject' in a PyPy virtual > Environment. I don't have this problem on a regular Python-2.7 virtual > environment. > > pypy-c-jit-50199-24a17a8610e1-linux > Ubuntu Lucid Lynx > > Regards, > > Aroldo. > > ------ ?links ----- > > http://pypi.python.org/pypi/grokproject > http://grok.zope.org/about/download > > ------------ output ------------------------------------- > (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ which python > /home/aroldo/tmp/python/tmp-env-pypy/bin/python > (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ python > Python 2.7.1 (24a17a8610e1, Dec 06 2011, 04:30:54) > [PyPy 1.7.1-dev0 with GCC 4.4.3] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
> executing the Python startup file: "~/.pystartup" > And now for something completely different: ``out-of-lie-guards'' >>>>> quit() > (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ which pip > /home/aroldo/tmp/python/tmp-env-pypy/bin/pip > (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ env|grep ENV > VIRTUAL_ENV=/home/aroldo/tmp/python/tmp-env-pypy > PIP_ENVIRONMENT=/home/aroldo/tmp/python/tmp-env-pypy > (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ pip install grokproject > Downloading/unpacking grokproject > ?Downloading grokproject-2.6.tar.gz (68Kb): 68Kb downloaded > ?Running setup.py egg_info for package grokproject > > Downloading/unpacking PasteScript>=1.6 (from grokproject) > ?Downloading PasteScript-1.7.5.tar.gz (129Kb): 129Kb downloaded > ?Running setup.py egg_info for package PasteScript > ... > ? ? Installing grokproject script to > /home/aroldo/tmp/python/tmp-env-pypy/bin > ?Running setup.py install for PasteScript > ... > Successfully installed grokproject PasteScript Paste PasteDeploy > Cleaning up... > (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ which grokproject > /home/aroldo/tmp/python/tmp-env-pypy/bin/grokproject > (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ grokproject MyCave > Enter user (Name of an initial administrator user): grok > Enter passwd (Password for the initial administrator user): Traceback (most > recent call last): > ?File "app_main.py", line 51, in run_toplevel > ?File "/home/aroldo/tmp/python/tmp-env-pypy/bin/grokproject", line 8, in > > ? ?load_entry_point('grokproject==2.6', 'console_scripts', 'grokproject')() > ?File > "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/grokproject/main.py", > line 91, in main > ? ?exit_code = runner.run(option_args + ['-t', template_name, project] + > extra_args) > ?File > "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/paste/script/command.py", > line 238, in run > ? ?result = self.command() > ?File > "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/paste/script/create_distro.py", > line 125, in command > ? ?vars = template.check_vars(vars, self) > ?File > "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/grokproject/templates.py", > line 66, in check_vars > ? ?vars = super(GrokProject, self).check_vars(vars, cmd) > ?File > "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/paste/script/templates.py", > line 73, in check_vars > ? ?response = cmd.challenge(prompt, var.default, var.should_echo) > ?File > "/home/aroldo/tmp/python/tmp-env-pypy/site-packages/paste/script/command.py", > line 321, in challenge > ? ?response = prompt_method(prompt).strip() > ?File "/opt/pypy/lib-python/2.7/getpass.py", line 74, in unix_getpass > ? ?stream.flush() ?# issue7208 > IOError: [Errno 29] Illegal seek: '' > (tmp-env-pypy)aroldo at aroldo-laptop:~/tmp/python$ > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev This does look like a pypy bug, in fact it may be a dupe of https://bugs.pypy.org/issue956 but it has to be doublechecked. From arigo at tunes.org Wed Dec 7 18:13:00 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 7 Dec 2011 18:13:00 +0100 Subject: [pypy-dev] pypy/bin/checkmodule.py Message-ID: Hi all, I fixed pypy/bin/checkmodules.py (I actually rewrote it from scratch). It may no longer accept the _clr module, I didn't check, but now it accepts a good number of other built-in modules. For example micronumpy. 
So I would recommend micronumpy developers to use it before they check in :-) It gives a "translates / does not translate" answer in 30 seconds. A bient?t, Armin. From fijall at gmail.com Wed Dec 7 19:07:43 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 7 Dec 2011 20:07:43 +0200 Subject: [pypy-dev] pypy/bin/checkmodule.py In-Reply-To: References: Message-ID: On Wed, Dec 7, 2011 at 7:13 PM, Armin Rigo wrote: > Hi all, > > I fixed pypy/bin/checkmodules.py (I actually rewrote it from scratch). > ?It may no longer accept the _clr module, I didn't check, but now it > accepts a good number of other built-in modules. > > For example micronumpy. ?So I would recommend micronumpy developers to > use it before they check in :-) ?It gives a "translates / does not > translate" answer in 30 seconds. > Does it accept -Ojit? It has been traditionally a big cause of problems with micronumpy. Anyway, jokes aside, thanks :) > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From fijall at gmail.com Wed Dec 7 19:09:56 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 7 Dec 2011 20:09:56 +0200 Subject: [pypy-dev] pypy/bin/checkmodule.py In-Reply-To: References: Message-ID: On Wed, Dec 7, 2011 at 8:07 PM, Maciej Fijalkowski wrote: > On Wed, Dec 7, 2011 at 7:13 PM, Armin Rigo wrote: >> Hi all, >> >> I fixed pypy/bin/checkmodules.py (I actually rewrote it from scratch). >> ?It may no longer accept the _clr module, I didn't check, but now it >> accepts a good number of other built-in modules. >> >> For example micronumpy. ?So I would recommend micronumpy developers to >> use it before they check in :-) ?It gives a "translates / does not >> translate" answer in 30 seconds. >> > > Does it accept -Ojit? It has been traditionally a big cause of > problems with micronumpy. > > Anyway, jokes aside, thanks :) > >> >> A bient?t, >> >> Armin. >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev by the way. how about adding checkmodule runs to module tests then? From arigo at tunes.org Wed Dec 7 20:06:38 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 7 Dec 2011 20:06:38 +0100 Subject: [pypy-dev] pypy/bin/checkmodule.py In-Reply-To: References: Message-ID: Hi, On Wed, Dec 7, 2011 at 19:09, Maciej Fijalkowski wrote: >> Does it accept -Ojit? It has been traditionally a big cause of >> problems with micronumpy. >> >> Anyway, jokes aside, thanks :) Jokes aside, it just annotates and rtypes, so I think the -O option makes no difference at all --- but I'm not completely sure about that claim. The config options can be tweaked if needed... > by the way. how about adding checkmodule runs to module tests then? Good idea. I added a few modules to pypy/objspace/fake/test/test_zmodule, but indeed these tests should be moved to their respective modules. A bient?t, Armin. From fijall at gmail.com Wed Dec 7 20:17:02 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 7 Dec 2011 21:17:02 +0200 Subject: [pypy-dev] pypy/bin/checkmodule.py In-Reply-To: References: Message-ID: On Wed, Dec 7, 2011 at 9:06 PM, Armin Rigo wrote: > Hi, > > On Wed, Dec 7, 2011 at 19:09, Maciej Fijalkowski wrote: >>> Does it accept -Ojit? It has been traditionally a big cause of >>> problems with micronumpy. 
>>> >>> Anyway, jokes aside, thanks :) > > Jokes aside, it just annotates and rtypes, so I think the -O option > makes no difference at all --- but I'm not completely sure about that > claim. ?The config options can be tweaked if needed... > >> by the way. how about adding checkmodule runs to module tests then? > > Good idea. ?I added a few modules to > pypy/objspace/fake/test/test_zmodule, but indeed these tests should be > moved to their respective modules. > > > A bient?t, > > Armin. I moved the numpy one there. Also enabled list comprehension. From rinu.matrix at gmail.com Thu Dec 8 06:30:24 2011 From: rinu.matrix at gmail.com (Rinu Boney) Date: Thu, 8 Dec 2011 11:00:24 +0530 Subject: [pypy-dev] RPython Message-ID: [ forgive me if it is a dumb question ] is there any article or a tutorial that shows how to use the rpython toolkit for a beginner ( just to hack around and learn stuff ) ? what is the relation between rpythonic ( http://code.google.com/p/rpythonic/ ) and rpython ? where can i get more information on rpython ( i have already seen what is written in the coding guide! ) ? - Rinu. -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Thu Dec 8 07:01:00 2011 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 8 Dec 2011 01:01:00 -0500 Subject: [pypy-dev] RPython In-Reply-To: References: Message-ID: 2011/12/8 Rinu Boney : > [ forgive me if it is a dumb question ] > is there any article or a tutorial that shows how to use the rpython toolkit > for a beginner ( just to hack around and learn stuff ) ? > what is the relation between rpythonic > (?http://code.google.com/p/rpythonic/?) and rpython ? Zero. > where can i get more information on rpython ( i have already seen what is > written in the coding guide! ) ? You can look through http://morepypy.blogspot.com . There are some tutorials on writing interpreters there. -- Regards, Benjamin From justinnoah at gmail.com Thu Dec 8 09:09:49 2011 From: justinnoah at gmail.com (Justin Noah) Date: Thu, 8 Dec 2011 00:09:49 -0800 Subject: [pypy-dev] Clang benchmarks Message-ID: Here are the llvm/clang build using the shadowstack gc. What do you think? Also, I will be downloading the prebuilt binary and running benchmarks and post them as well. 
Report on Linux infinity 3.1.1-gentoo #1 SMP PREEMPT Sun Nov 20 03:57:03 PST 2011 x86_64 AMD Turion(tm) II Dual-Core Mobile M520 Total CPU cores: 2 ### ai ### Min: 0.140570 -> 0.128204: 1.0965x faster Avg: 0.158339 -> 0.148346: 1.0674x faster Significant (t=2.351561, a=0.95) Stddev: 0.01762 -> 0.02434: 1.3809x larger ### bm_chameleon ### Min: 0.051691 -> 0.051436: 1.0050x faster Avg: 0.062364 -> 0.065282: 1.0468x slower Not significant Stddev: 0.01720 -> 0.01883: 1.0947x larger ### bm_mako ### Min: 0.152917 -> 0.153566: 1.0042x slower Avg: 0.169574 -> 0.169380: 1.0011x faster Not significant Stddev: 0.02681 -> 0.02674: 1.0025x smaller ### chaos ### Min: 0.024781 -> 0.024249: 1.0219x faster Avg: 0.037913 -> 0.037324: 1.0158x faster Not significant Stddev: 0.07050 -> 0.07000: 1.0072x smaller ### crypto_pyaes ### Min: 0.121740 -> 0.121273: 1.0038x faster Avg: 0.136873 -> 0.136358: 1.0038x faster Not significant Stddev: 0.05667 -> 0.05649: 1.0033x smaller ### django ### Min: 0.104028 -> 0.100156: 1.0387x faster Avg: 0.116562 -> 0.114053: 1.0220x faster Significant (t=2.133187, a=0.95) Stddev: 0.00590 -> 0.00586: 1.0077x smaller ### fannkuch ### Min: 0.514067 -> 0.536132: 1.0429x slower Avg: 0.520998 -> 0.542397: 1.0411x slower Significant (t=-6.626659, a=0.95) Stddev: 0.01654 -> 0.01575: 1.0501x smaller ### float ### Min: 0.085486 -> 0.084867: 1.0073x faster Avg: 0.106569 -> 0.104426: 1.0205x faster Not significant Stddev: 0.01745 -> 0.01596: 1.0931x smaller ### go ### Min: 0.271402 -> 0.269337: 1.0077x faster Avg: 0.483500 -> 0.483173: 1.0007x faster Not significant Stddev: 0.19943 -> 0.19894: 1.0025x smaller ### html5lib ### Min: 5.856357 -> 5.745086: 1.0194x faster Avg: 7.952101 -> 7.824551: 1.0163x faster Not significant Stddev: 2.80613 -> 2.76699: 1.0141x smaller ### json_bench ### Min: 3.525704 -> 3.533075: 1.0021x slower Avg: 3.563207 -> 3.577035: 1.0039x slower Not significant Stddev: 0.14274 -> 0.15135: 1.0604x larger ### meteor-contest ### Min: 0.335457 -> 0.336462: 1.0030x slower Avg: 0.343577 -> 0.344685: 1.0032x slower Not significant Stddev: 0.01572 -> 0.01579: 1.0045x larger ### nbody_modified ### Min: 0.085591 -> 0.085202: 1.0046x faster Avg: 0.087704 -> 0.087397: 1.0035x faster Not significant Stddev: 0.00601 -> 0.00633: 1.0538x larger ### pyflate-fast ### Min: 1.038416 -> 1.036536: 1.0018x faster Avg: 1.078068 -> 1.086279: 1.0076x slower Not significant Stddev: 0.02674 -> 0.02464: 1.0854x smaller ### raytrace-simple ### Min: 0.066854 -> 0.067021: 1.0025x slower Avg: 0.081450 -> 0.081594: 1.0018x slower Not significant Stddev: 0.02581 -> 0.02637: 1.0218x larger ### richards ### Min: 0.008039 -> 0.007999: 1.0050x faster Avg: 0.008915 -> 0.008894: 1.0024x faster Not significant Stddev: 0.00269 -> 0.00267: 1.0065x smaller ### rietveld ### Min: 0.243249 -> 0.245089: 1.0076x slower Avg: 0.614075 -> 0.609522: 1.0075x faster Not significant Stddev: 0.59714 -> 0.57957: 1.0303x smaller ### slowspitfire ### Min: 0.677112 -> 0.684585: 1.0110x slower Avg: 0.731244 -> 0.721026: 1.0142x faster Significant (t=2.087466, a=0.95) Stddev: 0.02165 -> 0.02700: 1.2469x larger ### spambayes ### Min: 0.151896 -> 0.151275: 1.0041x faster Avg: 0.292433 -> 0.293944: 1.0052x slower Not significant Stddev: 0.12329 -> 0.12527: 1.0161x larger ### spectral-norm ### Min: 0.028405 -> 0.028729: 1.0114x slower Avg: 0.032538 -> 0.033337: 1.0245x slower Not significant Stddev: 0.01354 -> 0.01373: 1.0144x larger ### spitfire ### Min: 9.910000 -> 10.030000: 1.0121x slower Avg: 10.039000 -> 10.147800: 1.0108x slower 
Significant (t=-5.090227, a=0.95) Stddev: 0.11163 -> 0.10189: 1.0957x smaller ### spitfire_cstringio ### Min: 4.920000 -> 4.920000: no change Avg: 4.955000 -> 4.956400: 1.0003x slower Not significant Stddev: 0.06225 -> 0.07056: 1.1336x larger ### sympy_expand ### Min: 1.990902 -> 2.037110: 1.0232x slower Avg: 2.497677 -> 2.532886: 1.0141x slower Not significant Stddev: 0.97914 -> 0.96325: 1.0165x smaller ### sympy_integrate ### Min: 4.294102 -> 4.395819: 1.0237x slower Avg: 6.417529 -> 6.457441: 1.0062x slower Not significant Stddev: 2.47638 -> 2.49616: 1.0080x larger ### sympy_str ### Min: 0.907993 -> 0.889144: 1.0212x faster Avg: 1.978583 -> 1.950795: 1.0142x faster Not significant Stddev: 1.11382 -> 1.09577: 1.0165x smaller ### sympy_sum ### Min: 1.349965 -> 1.286402: 1.0494x faster Avg: 1.730386 -> 1.727202: 1.0018x faster Not significant Stddev: 0.54941 -> 0.53929: 1.0188x smaller ### telco ### Min: 0.099984 -> 0.099985: 1.0000x slower Avg: 0.120782 -> 0.121002: 1.0018x slower Not significant Stddev: 0.05775 -> 0.05797: 1.0038x larger ### trans_annotate ### Raw results: [1368.2] None ### trans_rtype ### Raw results: [954.1] None ### trans_backendopt ### Raw results: [433.3] None ### trans_database ### Raw results: [586.7] None ### trans_source ### Raw results: [635.9] None ### twisted_iteration ### Min: 0.013585 -> 0.013649: 1.0047x slower Avg: 0.013755 -> 0.013809: 1.0039x slower Not significant Stddev: 0.00015 -> 0.00014: 1.0542x smaller ### twisted_names ### Min: 0.007508 -> 0.007530: 1.0030x slower Avg: 0.007914 -> 0.007958: 1.0056x slower Not significant Stddev: 0.00019 -> 0.00020: 1.0402x larger ### twisted_pb ### Min: 0.038095 -> 0.037879: 1.0057x faster Avg: 0.040128 -> 0.039598: 1.0134x faster Significant (t=2.789748, a=0.95) Stddev: 0.00101 -> 0.00089: 1.1368x smaller ### twisted_tcp ### Min: 1.128350 -> 1.122807: 1.0049x faster Avg: 1.167165 -> 1.153992: 1.0114x faster Significant (t=2.337046, a=0.95) Stddev: 0.02843 -> 0.02794: 1.0174x smaller Here is the output of "cat /proc/cpuinfo | grep cache," I have attached the full output of cpuinfo. -- - Justin Noah -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cpuinfo Type: application/octet-stream Size: 1747 bytes Desc: not available URL: From arigo at tunes.org Thu Dec 8 10:11:36 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 8 Dec 2011 10:11:36 +0100 Subject: [pypy-dev] RPython In-Reply-To: References: Message-ID: Hi, On Thu, Dec 8, 2011 at 06:30, Rinu Boney wrote: > where can i get more information on rpython ( i have already seen what is > written in the coding guide! ) ? The coding guide describes the basics, and we have a number of examples of small interpreters besides the Python interpreter of PyPy, which you can look at. There is nothing like a lengthy tutorial or an article about RPython precisely, sorry. I think that the way to go is to write your RPython program[1] in normal Python while roughly following the guidelines set in the coding guide. Then when you try to translate it you will see errors that may help refine your understanding. [1] which I assume is an interpreter of some sort; it can of course be anything, but well, for "anything", please use either plain Python or (if you really need performance) another language like C, C++, Java, etc. A bient?t, Armin. 
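To make that concrete, here is a rough sketch of the kind of "target" file the translation toolchain works on. This is only an illustration, not code from the PyPy tree: the file name targethello.py and the message are made up, but the entry_point/target convention is what the translate.py driver in pypy/translator/goal expects, and the body sticks to the restricted subset of Python that the coding guide describes.

-------------------------------
# targethello.py -- hypothetical name for a minimal standalone RPython program
import os

def entry_point(argv):
    # Only the RPython subset is allowed here: static types, no fancy
    # dynamic tricks.  os.write() is a safe way to produce output.
    os.write(1, "Hello from RPython! argc = " + str(len(argv)) + "\n")
    return 0

def target(driver, args):
    # Called by the translation driver to discover the entry point.
    return entry_point, None
-------------------------------

Running something like "python pypy/translator/goal/translate.py targethello.py" should then either produce a standalone binary or stop with annotation errors, which is exactly the kind of feedback that helps refine your understanding of what is and is not RPython.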
From arigo at tunes.org Thu Dec 8 10:16:31 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 8 Dec 2011 10:16:31 +0100 Subject: [pypy-dev] Clang benchmarks In-Reply-To: References: Message-ID: Hi Justin, On Thu, Dec 8, 2011 at 09:09, Justin Noah wrote: > Here are the llvm/clang build using the shadowstack gc. What do you think? 1. What is the baseline you are comparing it with? 2. I assume these are normal JITting PyPys. In that case, then for benchmarks like these, it doesn't really matter what C compiler we used, because most of the time is spent inside JIT-generated assembler anyway. A bient?t, Armin. From faassen at startifact.com Thu Dec 8 11:16:28 2011 From: faassen at startifact.com (Martijn Faassen) Date: Thu, 08 Dec 2011 11:16:28 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? Message-ID: Hi PyPy folks, I've said so before in comments on your blog, but much kudos to the project, you guys have been doing a great job! Some time ago I tried PyPy against an artificial life experiment of mine which implemented a simple stack-based language in Python and was pleasantly surprised to see that PyPy actually sped things up quite a bit - I'd heard about the JIT not being too good with interpreters. Good enough to make it a lot faster, though. :) Anyway, on to my question. I already asked it once in 2007, and didn't get a very encouraging response, but we're years down the road now and crazy difficulties is something the PyPy folks are good at, so I'll ask it again. You all are, as I understand, exploring supporting Python 3 with PyPy. Cool. Would it be possible to have an interpreter that could support both Python 2 and Python 3 modules in the same runtime? I.e, an interpreter that supports importing Python 3 code from Python 2 modules, and vice versa? It would tremendously help the Python ecosystem for such a solution to be out there, because in such a situation people can start using Python 3 codebases in Python 2, encouraging more people to port their libraries to Python 3, and people can use Python 2 codebases in Python 3, encouraging more people to start projects in Python 3. I can't stress enough how much I believe that would help the Python ecosystem! I understand that the problems involved would be decidedly non-trivial, but that has never stopped you guys before. Modules would need to declare somehow that they were Python 2 or Python 3. There'd need to be, somehow, two separate import "spaces", where Python 2 code would import from Python 2 modules by default and Python 3 code would import from Python 3 modules by default. You'd also need something explicit to allow cross imports. Python 2 objects represented in Python 3 (and vice versa) would either need to be transformed for immutable built-in objects (a Python 2 string would become a Python 3 bytes), or proxied for custom objects. I remember when I brought this up at the time that Armin Rigo suggested a few conundrums that were hard to solve. What these were exactly I don't remember anymore, so I hope Armin will chime in and remind me. :) Regards, Martijn From alex.gaynor at gmail.com Thu Dec 8 11:35:36 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Thu, 8 Dec 2011 05:35:36 -0500 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: On Thu, Dec 8, 2011 at 5:16 AM, Martijn Faassen wrote: > Hi PyPy folks, > > I've said so before in comments on your blog, but much kudos to the > project, you guys have been doing a great job! 
Some time ago I tried PyPy > against an artificial life experiment of mine which implemented a simple > stack-based language in Python and was pleasantly surprised to see that > PyPy actually sped things up quite a bit - I'd heard about the JIT not > being too good with interpreters. Good enough to make it a lot faster, > though. :) > > Anyway, on to my question. I already asked it once in 2007, and didn't get > a very encouraging response, but we're years down the road now and crazy > difficulties is something the PyPy folks are good at, so I'll ask it again. > > You all are, as I understand, exploring supporting Python 3 with PyPy. > Cool. > > Would it be possible to have an interpreter that could support both Python > 2 and Python 3 modules in the same runtime? I.e, an interpreter that > supports importing Python 3 code from Python 2 modules, and vice versa? > > It would tremendously help the Python ecosystem for such a solution to be > out there, because in such a situation people can start using Python 3 > codebases in Python 2, encouraging more people to port their libraries to > Python 3, and people can use Python 2 codebases in Python 3, encouraging > more people to start projects in Python 3. I can't stress enough how much I > believe that would help the Python ecosystem! > > I understand that the problems involved would be decidedly non-trivial, > but that has never stopped you guys before. > > Modules would need to declare somehow that they were Python 2 or Python 3. > There'd need to be, somehow, two separate import "spaces", where Python 2 > code would import from Python 2 modules by default and Python 3 code would > import from Python 3 modules by default. You'd also need something explicit > to allow cross imports. Python 2 objects represented in Python 3 (and vice > versa) would either need to be transformed for immutable built-in objects > (a Python 2 string would become a Python 3 bytes), or proxied for custom > objects. > > I remember when I brought this up at the time that Armin Rigo suggested a > few conundrums that were hard to solve. What these were exactly I don't > remember anymore, so I hope Armin will chime in and remind me. :) > > Regards, > > Martijn > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > I don't think it can, or should be done, here's why: Say you have a Python 2 dictionary {"a": 3}, and you pass this over to py3k land, what does it become? Logically it becomes a {b"a": 3}, two problems: a) it now has bytes for keys, which means you can't use it as a **kwargs, b) internally ints have a totally different representation, since there's now only a unified long type (this one doesn't apply to pypy as much, just throwing it out there for completeness). Python 2 fundamentally does the string type wrong: it's both the general string type for everything, as well as the byte sequence; the result is that no program has enough semantic knowledge to know what you mean by a string when you pass it over the py3k boundary. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stefan_ml at behnel.de Thu Dec 8 13:54:14 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Thu, 08 Dec 2011 13:54:14 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Alex Gaynor, 08.12.2011 11:35: > On Thu, Dec 8, 2011 at 5:16 AM, Martijn Faassen wrote: >> Would it be possible to have an interpreter that could support both Python >> 2 and Python 3 modules in the same runtime? I.e, an interpreter that >> supports importing Python 3 code from Python 2 modules, and vice versa? > > I don't think it can, or should be done, here's why: Say you have a Python > 2 dictionary {"a": 3}, and you pass this over to py3k land, what does it > become? Logically it becomes a {b"a": 3}, two problems: a) it now has > bytes for keys, which means you can't use it as a **kwargs, b) internally > ints have a totally different representation, since there's now only a > unified long type (this one doesn't apply to pypy as much, just throwing it > out there for completeness). Python 2 fundamentally does the string type > wrong: it's both the general string type for everything, as well as the > byte sequence; the result is that no program has enough semantic knowledge to > know what you mean by a string when you pass it over the py3k boundary. I agree that it's not trivial. However, that doesn't mean it should not be done. Take Cython as the obvious example. It compiles your Py2 or Py3 code once, and it will then have to run in both environments. It has to pull a couple of tricks to make this generally work (some of which mimic the 2to3 tool), and even then, there are differences that leak into the code, some of which you mention above. So the code also has to be written in a way that avoids ambiguities and works around the differences that cannot be covered at compile time. Cython actually has three string types: bytes, str and unicode. The first and last are identical in Py2 and Py3, whereas str is bytes in Py2 and unicode in Py3. That makes it possible to explicitly write code for both environments, although it does not help much with the typical legacy code that uses str for everything. At least some of the problems would, however, go away if both runtimes were available at the same time. For example, builtins could behave differently based on the semantics of the source code that uses them, simply by using different builtins modules for both kinds of source code. That alone would go a rather long way. So, assuming that Martijn was actually referring to legacy code, I agree that it won't solve the problem at hand. But that doesn't mean it would be impossible to benefit from it. Stefan From henrymitchon at postaweb.com Thu Dec 8 16:43:28 2011 From: henrymitchon at postaweb.com (Henry Mithchell) Date: Thu, 8 Dec 2011 16:43:28 +0100 Subject: [pypy-dev] {Positioning- URL - Review} Message-ID: <46141689.20111208164328@postaweb.com> pypy-dev at codespeak.net : Your web site is really good but you could be missing out on a lot of online business because of where your site shows up on the major search directories. A few simple changes could greatly increase your web traffic and your bottom line. Reply to us and we will give you a free analysis of your site and show you what will make the difference for your business. Include the best way to reach you with the results. Sincerely, Henry Mitchell -------------- next part -------------- An HTML attachment was scrubbed...
URL: From faassen at startifact.com Thu Dec 8 20:07:35 2011 From: faassen at startifact.com (Martijn Faassen) Date: Thu, 08 Dec 2011 20:07:35 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hey, On 12/08/2011 11:35 AM, Alex Gaynor wrote: > I don't think it can, or should be done, here's why: Say you have a > Python 2 dictionary {"a": 3}, and you pass this over to py3k land, what > does it become? Logically it becomes a {b"a": 3}, two problems: a) it > now has bytes for keys, which means you can't use it as a **kwargs, b) > internally ints have a totally different representation, since there's > now only a unified long type (this one doesn't apply to pypy as much, > just throwing it out there for completeness). Python 2 fundamentally > does the string type wrong, it's both the general string type for > everythign, as well as the byte sequence, the result is no program has > enough semantic knowledge to know what you mean by a string when you > pass it over hte py3k boundary. You could come up with a rule for that amended by some annotations. If you get a Python 2 string type, that'd be bytes in Python 3. Unless there's some form of annotation somewhere (handwave) that says otherwise to the proxying layer. Many Python 2 APIs do unicode fairly well actually. There is still a problematic situation if the API returns a str type if the content is plain ascii: elementtree does this for instance, but no problem passing in unicode objects from Python 3. In that case I guess the only hope is some kind of annotation system. It'd be like a native language binding problem: the problem would probably be less big than usual given that the languages are very similar. Regards, Martijn From faassen at startifact.com Thu Dec 8 20:12:23 2011 From: faassen at startifact.com (Martijn Faassen) Date: Thu, 08 Dec 2011 20:12:23 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hey Stefan, On 12/08/2011 01:54 PM, Stefan Behnel wrote: > So, assuming that Martijn was actually referring to legacy code, I agree > that it won't solve the problem at hand. But that doesn't mean it would > be impossible to benefit from it. Not sure what you mean here - I was referring to using existing Python 2 code in Python 3 projects, and vice versa. The Cython experience is definitely interesting: it's good to know there is some experience with this problem. My hope is that you can come up with some sensible rules and fix the rest with a few annotations (in a separate module informing the rest, or included in the codebase itself) for the ambiguous cases. If there's any native language that it should be easy to generate bindings for in Python it should be Python, right? :) Anyway, separate from the binding, I'm wondering about the challenge of two Pythons in one runtime - how hard would it be to generate this with the current PyPy? (imagining two different Python interpreters were available) Regards, Martijn From justinnoah at gmail.com Fri Dec 9 05:25:09 2011 From: justinnoah at gmail.com (Justin Noah) Date: Thu, 8 Dec 2011 20:25:09 -0800 Subject: [pypy-dev] Clang benchmarks In-Reply-To: References: Message-ID: 1. I had just run 'pypy ./runner.py' not knowing what I was actually supposed to do. I have since re-run the benchmarks, after discussing how they should properly be run, and came up with the results below. 2. 
These are "normal" JiTting PyPys, however one is using the asmgcc gc where as the other is using the shadowstack gc. I thought it might be interesting to see if any significant info can be gathered from this benchmark (something. I compared my llvm/clang build to the prebuilt 64bit Linux binary provided on the PyPy downloads page. pypy ./runner.py --baseline=/usr/bin/pypy -p /home/chaos/pypy-1.7/bin/pypy The baseline being the clang build of pypy. Here are the results from stdout: Wrote /home/chaos/Desktop/benchmarks/lib/pypy/lib_pypy/ctypes_config_cache/_pyexpat_x86_64_.py. Wrote /home/chaos/Desktop/benchmarks/lib/pypy/lib_pypy/ctypes_config_cache/_locale_x86_64_.py. Wrote /home/chaos/Desktop/benchmarks/lib/pypy/lib_pypy/ctypes_config_cache/_resource_x86_64_.py. Wrote /home/chaos/Desktop/benchmarks/lib/pypy/lib_pypy/ctypes_config_cache/_syslog_x86_64_.py. Running ai... Running bm_chameleon... Running bm_mako... Running chaos... Running crypto_pyaes... Running django... Running fannkuch... Running float... Running go... Running html5lib... Running json_bench... Running meteor-contest... Running nbody_modified... Running pyflate-fast... Running raytrace-simple... Running richards... Running rietveld... Running slowspitfire... Running spambayes... Running spectral-norm... Running spitfire... Running spitfire_cstringio... Running sympy_expand... Running sympy_integrate... Running sympy_str... Running sympy_sum... Running telco... Running translate... Running twisted_iteration... Running twisted_names... Running twisted_pb... Running twisted_tcp... Report on Linux infinity 3.1.1-gentoo #1 SMP PREEMPT Sun Nov 20 03:57:03 PST 2011 x86_64 AMD Turion(tm) II Dual-Core Mobile M520 Total CPU cores: 2 ### ai ### Min: 0.139957 -> 0.134342: 1.0418x faster Avg: 0.156932 -> 0.151293: 1.0373x faster Not significant Stddev: 0.01776 -> 0.01588: 1.1185x smaller ### bm_chameleon ### Min: 0.051917 -> 0.050041: 1.0375x faster Avg: 0.064268 -> 0.060951: 1.0544x faster Not significant Stddev: 0.01868 -> 0.01755: 1.0643x smaller ### bm_mako ### Min: 0.146432 -> 0.136968: 1.0691x faster Avg: 0.164050 -> 0.153924: 1.0658x faster Not significant Stddev: 0.02840 -> 0.02575: 1.1028x smaller ### chaos ### Min: 0.024366 -> 0.025635: 1.0521x slower Avg: 0.037429 -> 0.038286: 1.0229x slower Not significant Stddev: 0.07200 -> 0.07000: 1.0286x smaller ### crypto_pyaes ### Min: 0.121551 -> 0.133157: 1.0955x slower Avg: 0.136914 -> 0.147496: 1.0773x slower Not significant Stddev: 0.05749 -> 0.05621: 1.0229x smaller ### django ### Min: 0.101975 -> 0.095751: 1.0650x faster Avg: 0.110838 -> 0.104973: 1.0559x faster Significant (t=3.146148, a=0.95) Stddev: 0.00960 -> 0.00904: 1.0616x smaller ### fannkuch ### Min: 0.511440 -> 0.446432: 1.1456x faster Avg: 0.521307 -> 0.451537: 1.1545x faster Significant (t=23.373959, a=0.95) Stddev: 0.01503 -> 0.01482: 1.0138x smaller ### float ### Min: 0.083709 -> 0.082221: 1.0181x faster Avg: 0.106045 -> 0.102772: 1.0318x faster Not significant Stddev: 0.01608 -> 0.01566: 1.0271x smaller ### go ### Min: 0.269653 -> 0.263512: 1.0233x faster Avg: 0.481460 -> 0.468852: 1.0269x faster Not significant Stddev: 0.19745 -> 0.19127: 1.0323x smaller ### html5lib ### Min: 5.790741 -> 5.692144: 1.0173x faster Avg: 7.893078 -> 7.687743: 1.0267x faster Not significant Stddev: 2.83904 -> 2.67228: 1.0624x smaller ### json_bench ### Min: 3.498221 -> 3.506862: 1.0025x slower Avg: 3.541327 -> 3.546107: 1.0013x slower Not significant Stddev: 0.14921 -> 0.13440: 1.1102x smaller ### meteor-contest ### Min: 0.338499 -> 
0.313130: 1.0810x faster Avg: 0.346351 -> 0.320863: 1.0794x faster Significant (t=8.045312, a=0.95) Stddev: 0.01531 -> 0.01635: 1.0676x larger ### nbody_modified ### Min: 0.084227 -> 0.083003: 1.0147x faster Avg: 0.086098 -> 0.085613: 1.0057x faster Not significant Stddev: 0.00588 -> 0.00667: 1.1353x larger ### pyflate-fast ### Min: 1.049249 -> 1.025093: 1.0236x faster Avg: 1.099029 -> 1.069142: 1.0280x faster Significant (t=5.351335, a=0.95) Stddev: 0.02828 -> 0.02756: 1.0262x smaller ### raytrace-simple ### Min: 0.067093 -> 0.068943: 1.0276x slower Avg: 0.081748 -> 0.082917: 1.0143x slower Not significant Stddev: 0.02475 -> 0.02978: 1.2031x larger ### richards ### Min: 0.008263 -> 0.008141: 1.0150x faster Avg: 0.009398 -> 0.009126: 1.0298x faster Not significant Stddev: 0.00345 -> 0.00286: 1.2085x smaller ### rietveld ### Min: 0.243964 -> 0.241945: 1.0083x faster Avg: 0.612855 -> 0.594768: 1.0304x faster Not significant Stddev: 0.58823 -> 0.56197: 1.0467x smaller ### slowspitfire ### Min: 0.643957 -> 0.624110: 1.0318x faster Avg: 0.671090 -> 0.654331: 1.0256x faster Significant (t=3.642100, a=0.95) Stddev: 0.02335 -> 0.02265: 1.0310x smaller ### spambayes ### Min: 0.153035 -> 0.147505: 1.0375x faster Avg: 0.294454 -> 0.281956: 1.0443x faster Not significant Stddev: 0.12360 -> 0.11756: 1.0514x smaller ### spectral-norm ### Min: 0.028271 -> 0.029005: 1.0260x slower Avg: 0.032468 -> 0.033037: 1.0175x slower Not significant Stddev: 0.01273 -> 0.01247: 1.0204x smaller ### spitfire ### Min: 9.720000 -> 9.230000: 1.0531x faster Avg: 9.817600 -> 9.385200: 1.0461x faster Significant (t=16.544288, a=0.95) Stddev: 0.10499 -> 0.15209: 1.4486x larger ### spitfire_cstringio ### Min: 4.840000 -> 4.710000: 1.0276x faster Avg: 4.863400 -> 4.748400: 1.0242x faster Significant (t=8.894143, a=0.95) Stddev: 0.05294 -> 0.07454: 1.4081x larger ### sympy_expand ### Min: 2.006796 -> 1.881002: 1.0669x faster Avg: 2.503000 -> 2.402378: 1.0419x faster Not significant Stddev: 0.97556 -> 0.93094: 1.0479x smaller ### sympy_integrate ### Min: 4.551186 -> 4.328435: 1.0515x faster Avg: 6.496911 -> 6.308380: 1.0299x faster Not significant Stddev: 2.48643 -> 2.42822: 1.0240x smaller ### sympy_str ### Min: 0.896834 -> 1.034409: 1.1534x slower Avg: 1.944165 -> 1.933199: 1.0057x faster Not significant Stddev: 1.08961 -> 1.02921: 1.0587x smaller ### sympy_sum ### Min: 1.292385 -> 1.256345: 1.0287x faster Avg: 1.729903 -> 1.671316: 1.0351x faster Not significant Stddev: 0.53698 -> 0.50304: 1.0675x smaller ### telco ### Min: 0.098984 -> 0.093985: 1.0532x faster Avg: 0.119122 -> 0.113263: 1.0517x faster Not significant Stddev: 0.05782 -> 0.05571: 1.0379x smaller ### trans_annotate ### Raw results: [1377.2] None ### trans_rtype ### Raw results: [955.9] None ### trans_backendopt ### Raw results: [421.4] None ### trans_database ### Raw results: [568.0] None ### trans_source ### Raw results: [615.2] None ### twisted_iteration ### Min: 0.014090 -> 0.014660: 1.0404x slower Avg: 0.014315 -> 0.014838: 1.0366x slower Significant (t=-18.280629, a=0.95) Stddev: 0.00015 -> 0.00014: 1.0274x smaller ### twisted_names ### Min: 0.007479 -> 0.007457: 1.0030x faster Avg: 0.007976 -> 0.007883: 1.0118x faster Significant (t=2.202431, a=0.95) Stddev: 0.00023 -> 0.00018: 1.2883x smaller ### twisted_pb ### Min: 0.037453 -> 0.039920: 1.0659x slower Avg: 0.039451 -> 0.041383: 1.0490x slower Significant (t=-10.352263, a=0.95) Stddev: 0.00103 -> 0.00082: 1.2537x smaller ### twisted_tcp ### Min: 1.113818 -> 1.192250: 1.0704x slower Avg: 1.138590 -> 
1.222393: 1.0736x slower Significant (t=-16.106458, a=0.95) Stddev: 0.02450 -> 0.02745: 1.1207x larger On Thu, Dec 8, 2011 at 01:16, Armin Rigo wrote: > Hi Justin, > > On Thu, Dec 8, 2011 at 09:09, Justin Noah wrote: > > Here are the llvm/clang build using the shadowstack gc. What do you > think? > > 1. What is the baseline you are comparing it with? > > 2. I assume these are normal JITting PyPys. In that case, then > for benchmarks like these, it doesn't really matter what C compiler we > used, because most of the time is spent inside JIT-generated assembler > anyway. > > > A bient?t, > > Armin. > -- - Justin Noah -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Fri Dec 9 12:31:33 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 9 Dec 2011 12:31:33 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hi Martijn, On Thu, Dec 8, 2011 at 20:12, Martijn Faassen wrote: > Anyway, separate from the binding, I'm wondering about the challenge of two > Pythons in one runtime - how hard would it be to generate this with the > current PyPy? (imagining two different Python interpreters were available) Getting two completely separate interpreters in one process is trivial in PyPy, quite unlike CPython (blame C). This works even with the GC (you get one for both) and the JIT (you get one JIT that can "meta-trace" starting from either interpreter). That means that maybe PyPy is better than CPython as a starting point. Of course someone still has a serious amount of work to do to integrate the two interpreters to the point where it becomes something more than, well, two unrelated interpreters in one process. [ More generally, you can translate several interpreters for different languages and getting a single unified JIT, capable of inlining across languages. It is an interesting challenge that explores some unique benefits of PyPy --- worth a paper, most probably :-) ] A bient?t, Armin. From faassen at startifact.com Fri Dec 9 13:09:53 2011 From: faassen at startifact.com (Martijn Faassen) Date: Fri, 9 Dec 2011 13:09:53 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hey Armin, Thanks for your response! On Fri, Dec 9, 2011 at 12:31 PM, Armin Rigo wrote: > On Thu, Dec 8, 2011 at 20:12, Martijn Faassen wrote: >> Anyway, separate from the binding, I'm wondering about the challenge of two >> Pythons in one runtime - how hard would it be to generate this with the >> current PyPy? (imagining two different Python interpreters were available) > > Getting two completely separate interpreters in one process is trivial > in PyPy, quite unlike CPython (blame C). ?This works even with the GC > (you get one for both) and the JIT (you get one JIT that can > "meta-trace" starting from either interpreter). ?That means that maybe > PyPy is better than CPython as a starting point. Most definitely, I'd say! > Of course someone > still has a serious amount of work to do to integrate the two > interpreters to the point where it becomes something more than, well, > two unrelated interpreters in one process. Yes, that's the challenge. But that sounds like a feasible challenge - PyPy also has support for transparent proxies, so my hope is that you could write a Python 2 to Python 3 object proxy and one that does it the other way around. If you have that and some import namespace separation hackery you'd already be quite far. 
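To give a rough idea of what such a proxy would have to do at the boundary, here is a plain-Python sketch. The names Py2Proxy and adapt_to_py3 are invented for illustration; this ignores PyPy's actual transparent-proxy machinery and hard-codes one possible default rule (byte strings from the "2" side are treated as ASCII text) that a real bridge would need to make configurable per API, along the lines of the annotation system mentioned earlier in the thread.

-------------------------------
def adapt_to_py3(value):
    # Default adaptation rule in this sketch: byte strings coming from the
    # Python 2 side are assumed to be ASCII text unless annotated otherwise.
    if isinstance(value, bytes):
        return value.decode('ascii')
    return value

class Py2Proxy(object):
    # Wrap an object owned by the "Python 2 side" for use on the "3 side".
    def __init__(self, obj):
        self._obj = obj

    def __getattr__(self, name):
        result = getattr(self._obj, name)
        if callable(result):
            def call(*args, **kwargs):
                # Delegate the call and adapt whatever comes back.
                return adapt_to_py3(result(*args, **kwargs))
            return call
        return adapt_to_py3(result)
-------------------------------

With p = Py2Proxy(some_py2_object), attribute access and method calls on p hand back text rather than byte strings; the reverse direction, mutable containers, identity and isinstance checks are where the harder conundrums presumably live.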
> [ More generally, you can translate several interpreters for different > languages and getting a single unified JIT, capable of inlining across > languages. ?It is an interesting challenge that explores some unique > benefits of PyPy --- worth a paper, most probably :-) ] That is indeed a major benefit. It would allow a much more seamless upgrade process between multiple incompatible versions of programming languages. In the Python 2 and Python 3 case, thanks to the unified JIT you might pay only very little penalty for the integration, depending on how the JIT and proxies would interact. Of course all this would have to wait for a Python 3 implementation in PyPy too. I was unenthusiastic about that before, as I believe PyPy is in a unique competitive position right now compared to CPython as an actively developed Python 2 implementation. But having one would make this integration possible, which I believe would solve endless issues the community has now with the discontinuity between the two languages. In the mean time it should be possible to experiment with just two Python 2 interpreters in the same process space, and work on a system to share modules between them. It would not provide any real benefit to just a single Python 2 as far as I can tell, but it could serve as a proof of concept. Is anyone interested in working with me on the latter experiment? Regards, Martijn From fuzzyman at gmail.com Fri Dec 9 14:10:25 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Fri, 9 Dec 2011 13:10:25 +0000 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: On 9 December 2011 11:31, Armin Rigo wrote: > Hi Martijn, > > On Thu, Dec 8, 2011 at 20:12, Martijn Faassen > wrote: > > Anyway, separate from the binding, I'm wondering about the challenge of > two > > Pythons in one runtime - how hard would it be to generate this with the > > current PyPy? (imagining two different Python interpreters were > available) > > Getting two completely separate interpreters in one process is trivial > in PyPy, quite unlike CPython (blame C). This works even with the GC > (you get one for both) and the JIT (you get one JIT that can > "meta-trace" starting from either interpreter). That means that maybe > PyPy is better than CPython as a starting point. Of course someone > still has a serious amount of work to do to integrate the two > interpreters to the point where it becomes something more than, well, > two unrelated interpreters in one process. > > [ More generally, you can translate several interpreters for different > languages and getting a single unified JIT, capable of inlining across > languages. It is an interesting challenge that explores some unique > benefits of PyPy --- worth a paper, most probably :-) ] > The .NET Dynamic Language Runtime does this for IronPython / IronRuby - allowing multiple interpreters (for the same or different language engines) in the same process. An object from one interpreter can be used by the other, but retains its original semantics - so looking up attributes (or any other operation) calls back into the interpreter that owns the object to perform it. (This is a fundamental property of the way the DLR works, so it mostly came for "free". It's also how a .NET string can behave as a Python string for example.) This allows IronPython and IronRuby to interoperate - but Ruby objects retain their Ruby semantics even when used from Python (and vice versa). 
Resolver Systems used this a great deal for sharing objects between multiple Python interpreters - all for the same version of Python of course though. IronPython doesn't have the particular problem with strings though - under the hood all strings are .NET strings, but with different behaviour layered on top. (Even Python 2 byte-strings are .NET Unicode strings under the hood, with lots of magic to make that work.) All the best, Michael Foord > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Fri Dec 9 14:36:56 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 9 Dec 2011 14:36:56 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: 2011/12/9 Armin Rigo > Getting two completely separate interpreters in one process is trivial > in PyPy > Well, not so trivial; I played with this idea last evening. A few lines in targetpypystandalone.py to install a new objspace, and tried to translate... Here are the issues I encountered so far: - Builtin functions are created twice, yet the global registry (Function.find) only use the function name. I added "space" to the key. Maybe Builtin Functions should avoid to use the objspace entirely, I almost managed to do it except for the default values of arguments. - a couple of issues in modules, easily fixed. - now, I'm fighting with multimethods, which are added to the space *instance*: they fail when MethodOfFrozenPBCRepr sees different implementations for the same dispatcher (e.g. __mm_float_w_0_perform_call). I tried to add annspecialize.arg(0), but it still fails to translate. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From faassen at startifact.com Fri Dec 9 15:59:36 2011 From: faassen at startifact.com (Martijn Faassen) Date: Fri, 9 Dec 2011 15:59:36 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hey, On Fri, Dec 9, 2011 at 2:36 PM, Amaury Forgeot d'Arc wrote: > 2011/12/9 Armin Rigo >> >> Getting two completely separate interpreters in one process is trivial >> in PyPy > Well, not so trivial; I played with this idea last evening. Awesome! I'm glad someone who actually sounds like he knows what he's doing is trying this. :) > A few lines in targetpypystandalone.py to install a new objspace, > and tried to translate... > Here are the issues I?encountered so far: > - Builtin functions are created twice, yet the global registry > (Function.find) only use the function name. > I added "space" to the key. Maybe Builtin Functions should avoid to use the > objspace entirely, I almost > managed to do it except for the default values of arguments. > - a couple of issues in modules, easily fixed. > - now, I'm fighting with multimethods, which are added to the space > *instance*: they fail when?MethodOfFrozenPBCRepr sees different > implementations for the same?dispatcher?(e.g. __mm_float_w_0_perform_call). > I tried to add annspecialize.arg(0), but it still fails to translate. 
Is there a branch or something where I could try this out? Maybe I'll find a bit of time to mess around with this. Regards, Martijn From bokr at oz.net Fri Dec 9 16:18:15 2011 From: bokr at oz.net (Bengt Richter) Date: Fri, 09 Dec 2011 16:18:15 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: On 12/09/2011 01:09 PM Martijn Faassen wrote: > Hey Armin, > > Thanks for your response! > > On Fri, Dec 9, 2011 at 12:31 PM, Armin Rigo wrote: > >> On Thu, Dec 8, 2011 at 20:12, Martijn Faassen wrote: >>> Anyway, separate from the binding, I'm wondering about the challenge of two >>> Pythons in one runtime - how hard would it be to generate this with the >>> current PyPy? (imagining two different Python interpreters were available) >> >> Getting two completely separate interpreters in one process is trivial >> in PyPy, quite unlike CPython (blame C). This works even with the GC >> (you get one for both) and the JIT (you get one JIT that can >> "meta-trace" starting from either interpreter). That means that maybe >> PyPy is better than CPython as a starting point. > > Most definitely, I'd say! > >> Of course someone >> still has a serious amount of work to do to integrate the two >> interpreters to the point where it becomes something more than, well, >> two unrelated interpreters in one process. > > Yes, that's the challenge. But that sounds like a feasible challenge - > PyPy also has support for transparent proxies, so my hope is that you > could write a Python 2 to Python 3 object proxy and one that does it > the other way around. If you have that and some import namespace > separation hackery you'd already be quite far. > >> [ More generally, you can translate several interpreters for different >> languages and getting a single unified JIT, capable of inlining across >> languages. It is an interesting challenge that explores some unique >> benefits of PyPy --- worth a paper, most probably :-) ] > > That is indeed a major benefit. It would allow a much more seamless > upgrade process between multiple incompatible versions of programming > languages. In the Python 2 and Python 3 case, thanks to the unified > JIT you might pay only very little penalty for the integration, > depending on how the JIT and proxies would interact. > > Of course all this would have to wait for a Python 3 implementation in > PyPy too. I was unenthusiastic about that before, as I believe PyPy is > in a unique competitive position right now compared to CPython as an > actively developed Python 2 implementation. But having one would make > this integration possible, which I believe would solve endless issues > the community has now with the discontinuity between the two > languages. > > In the mean time it should be possible to experiment with just two > Python 2 interpreters in the same process space, and work on a system > to share modules between them. It would not provide any real benefit > to just a single Python 2 as far as I can tell, but it could serve as > a proof of concept. > > Is anyone interested in working with me on the latter experiment? > > Regards, > > Martijn PMJI, but I was thinking that if you had both py272 and py3k modules it would seem natural to keep them in separate directories, so I was wondering if __init__.py could do some useful work in effecting translations containing both kinds of sources? And maybe a command line option (-py2x or -py3x) could tell what kind of main to expect? Polyglot packages? 
;-) Maybe __init__.py could also invoke some kind of annotation ("handwave" ;-) file processing? Regards, Bengt Richter From stefan_ml at behnel.de Fri Dec 9 16:23:29 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 09 Dec 2011 16:23:29 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Martijn Faassen, 08.12.2011 20:12: > On 12/08/2011 01:54 PM, Stefan Behnel wrote: >> So, assuming that Martijn was actually referring to legacy code, I agree >> that it won't solve the problem at hand. But that doesn't mean it would >> be impossible to benefit from it. > > Not sure what you mean here - I was referring to using existing Python 2 > code in Python 3 projects, and vice versa. Sure, and I meant that you'd probably still have to modify (or rather fix) the code in order to make the integration work, if only to make it use proper bytes or unicode strings where data or text is passed into the other environment. The runtime can only provide a default adaptation here, and even if the string type behaviour is completely inherited from the source environment, you could still run into the Py2-UnicodeDecodeError hell on the other side when passing around text in encoded byte strings "because it worked in my local Py2 installation". > The Cython experience is definitely interesting: it's good to know there is > some experience with this problem. My hope is that you can come up with > some sensible rules and fix the rest with a few annotations (in a separate > module informing the rest, or included in the codebase itself) for the > ambiguous cases. It would definitely require a bit of "six" foo, but most likely less than without the second Python environment because the code only needs bug fixing, not adapting to new (builtin) semantics and syntax. Stefan From arigo at tunes.org Fri Dec 9 16:34:20 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 9 Dec 2011 16:34:20 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hi Amaury, On Fri, Dec 9, 2011 at 14:36, Amaury Forgeot d'Arc wrote: >> Getting two completely separate interpreters in one process is trivial >> in PyPy > > Well, not so trivial; I played with this idea last evening. > A few lines in targetpypystandalone.py to install a new objspace, > and tried to translate... No, I meant in this case something else: getting two *different* interpreters from different sources. Like PyPy and Pyrolog, or like what you would get if you imported in the same Python both the "default" and the "py3k" branch (with the proper amount of import hacks to keep them from conflicting). A bient?t, Armin. From amauryfa at gmail.com Fri Dec 9 16:50:23 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 9 Dec 2011 16:50:23 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: 2011/12/9 Armin Rigo > Hi Amaury, > > On Fri, Dec 9, 2011 at 14:36, Amaury Forgeot d'Arc > wrote: > >> Getting two completely separate interpreters in one process is trivial > >> in PyPy > > > > Well, not so trivial; I played with this idea last evening. > > A few lines in targetpypystandalone.py to install a new objspace, > > and tried to translate... > > No, I meant in this case something else: getting two *different* > interpreters from different sources. 
Like PyPy and Pyrolog, or like > what you would get if you imported in the same Python both the > "default" and the "py3k" branch (with the proper amount of import > hacks to keep them from conflicting). I was thinking of two object spaces with different config options, like "config.objspace.std.py3k", if we take the other route and have a single branch to support both versions. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From faassen at startifact.com Fri Dec 9 17:53:38 2011 From: faassen at startifact.com (Martijn Faassen) Date: Fri, 9 Dec 2011 17:53:38 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hi there, On Fri, Dec 9, 2011 at 4:18 PM, Bengt Richter wrote: > PMJI, but I was thinking that if you had both py272 and py3k modules it > would seem > natural to keep them in separate directories, True, I imagine you'd normally give them different module names. The standard library however is a clear example where there is potential for overlap. That's why I thought it would be simpler to have separate module spaces entirely and have an import2() in Python 3 and an import3() in Python 2 to import from the other place. I.e. baz= import3('foo.bar', 'baz') as the equivalent of a 'from... import'. You'd still need a way for the system to know which modules and packages are Python 2 and which are in Python 3. I imagine they'd share the same PYTHONPATH. I figure some kind of global dictionary with this information, and various ways to fill it (with a module-level declaration, a package level declaration in __init__, or completely externally through an API for the case when the library itself makes no such declaration - the latter would be the most compatible). If done on a package level, combined with a default setting, the burden shouldn't be too onerous for the developer. Regards, Martijn From faassen at startifact.com Fri Dec 9 17:57:30 2011 From: faassen at startifact.com (Martijn Faassen) Date: Fri, 9 Dec 2011 17:57:30 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hi there, Armin: > like what you would get if you imported in the same Python both the > "default" and the "py3k" branch (with the proper amount of import > hacks to keep them from conflicting). On Fri, Dec 9, 2011 at 4:50 PM, Amaury Forgeot d'Arc wrote: > I was thinking of two object spaces with different config options, > like "config.objspace.std.py3k", > if we take the other route and have a single branch to support both > versions. Oh, I was assuming that PyPy would have one branch supporting both python 2 and python 3. Is that currently not the case? What are the reasons behind not separating the two implementations in different packages? Regards, Martijn From arigo at tunes.org Fri Dec 9 18:27:11 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 9 Dec 2011 18:27:11 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hi, On Fri, Dec 9, 2011 at 17:57, Martijn Faassen wrote: > Oh, I was assuming that PyPy would have one branch supporting both > python 2 and python 3. Is that currently not the case? What are the > reasons behind not separating the two implementations in different > packages? I think it was not deeply thought out, but was just the simplest route to take. 
It does give us (mostly for free) a feature that is completely essential to have: continuously merging the "default" changes in "py3k". Otherwise it would be a mess that would end up with py3k not including all the more recent Python fixes and performance improvements we keep doing. A bient?t, Armin. From alexgolecmailinglists at gmail.com Fri Dec 9 22:19:18 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Fri, 9 Dec 2011 16:19:18 -0500 Subject: [pypy-dev] Could someone give me an idea of what is where with RPython? Message-ID: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> Hey all, I have the following conception of the RPython toolchain: - RPython translator is written in RPython, meaning it can be interpreted by CPython, and then run as a standalone executable once translated - The translator runs the standard interpreter on the input program. - The input program has some marker to say 'alright, I'm initialized, begin translating,' at which point execution stops and RPython starts - RPython kicks in, performs flow object space analysis on the initialized code objects. - Type annotation occurs based on the initialized types - The annotated code is passed down to the backend-facing code generator Is this correct? Do I have a correct basic understanding of what happens where? Alex From arigo at tunes.org Fri Dec 9 23:29:17 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 9 Dec 2011 23:29:17 +0100 Subject: [pypy-dev] Could someone give me an idea of what is where with RPython? In-Reply-To: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> References: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> Message-ID: Hi Alexander, The RPython toolchain is meant to be useful essentially for people that want to implement an interpreter for a dynamic language, or that want to hack at PyPy itself. If you're not in either category, you probably only want to use the "pypy" executable, in the same way as you use the "python" executable from CPython. To answer your first question: the RPython toolchain is not written in RPython, but in normal Python. We only run it once, in order to build the "pypy" executable --- similar to how the "python" executable is built once using a C compiler like gcc. (As the RPython translation is written in normal Python code, it runs with either "python" or a previous version of the "pypy" executable; nowadays we tend to use pypy for that because it is faster, but the result is the same.) (Did this help, or did I miss the point of your questions?... if so, I'm sorry) A bient?t, Armin. From alexgolecmailinglists at gmail.com Fri Dec 9 23:36:13 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Fri, 9 Dec 2011 17:36:13 -0500 Subject: [pypy-dev] Could someone give me an idea of what is where with RPython? In-Reply-To: References: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> Message-ID: Hi Armin, Thanks for the answer, but I'm having a hard time wrapping my head around this. I'm reading about object spaces, and I keep wondering how do the object spaces find their way into the CPython interpreter. In other words, since there is an element of partial initialization to the translation process, what performs the initialization, and how does the initialized product find its way into the translator? 
My current understanding is: - Interpret PyPy on top of CPython - The interpreted PyPy performs partial interpretation of its own source code until it reaches the RPython entry point - From that point forward, the code are handled by the flow object space to produce code objects. - The annotator runs and translation happens Is this correct? I've been spending my entire semester trying to grasp this, but I can't seem to get the totality of it into my head. Alex On Dec 9, 2011, at 5:29 PM, Armin Rigo wrote: > Hi Alexander, > > The RPython toolchain is meant to be useful essentially for people > that want to implement an interpreter for a dynamic language, or that > want to hack at PyPy itself. If you're not in either category, you > probably only want to use the "pypy" executable, in the same way as > you use the "python" executable from CPython. > > To answer your first question: the RPython toolchain is not written in > RPython, but in normal Python. We only run it once, in order to build > the "pypy" executable --- similar to how the "python" executable is > built once using a C compiler like gcc. (As the RPython translation > is written in normal Python code, it runs with either "python" or a > previous version of the "pypy" executable; nowadays we tend to use > pypy for that because it is faster, but the result is the same.) > > (Did this help, or did I miss the point of your questions?... if so, I'm sorry) > > > A bient?t, > > Armin. From faassen at startifact.com Fri Dec 9 23:55:13 2011 From: faassen at startifact.com (Martijn Faassen) Date: Fri, 9 Dec 2011 23:55:13 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hey, On Fri, Dec 9, 2011 at 6:27 PM, Armin Rigo wrote: > On Fri, Dec 9, 2011 at 17:57, Martijn Faassen wrote: >> Oh, I was assuming that PyPy would have one branch supporting both >> python 2 and python 3. Is that currently not the case? What are the >> reasons behind not separating the two implementations in different >> packages? > > I think it was not deeply thought out, but was just the simplest route > to take. ?It does give us (mostly for free) a feature that is > completely essential to have: continuously merging the "default" > changes in "py3k". ?Otherwise it would be a mess that would end up > with py3k not including all the more recent Python fixes and > performance improvements we keep doing. Yes, I figured merging would be the main benefit, that makes sense. Hopefully it can be made to work in parallel with the Python 2 implementation eventually though. Regards, Martijn From arigo at tunes.org Sat Dec 10 00:20:25 2011 From: arigo at tunes.org (Armin Rigo) Date: Sat, 10 Dec 2011 00:20:25 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: Hi Martijn, On Fri, Dec 9, 2011 at 23:55, Martijn Faassen wrote: > Yes, I figured merging would be the main benefit, that makes sense. > Hopefully it can be made to work in parallel with the Python 2 > implementation eventually though. After all, all you need is "hg" and a bit (or a lot) of hacking and you can translate by reading modules from both branches... :-) Armin From faassen at startifact.com Sat Dec 10 00:25:53 2011 From: faassen at startifact.com (Martijn Faassen) Date: Sat, 10 Dec 2011 00:25:53 +0100 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? 
In-Reply-To: References: Message-ID: Hey, On Sat, Dec 10, 2011 at 12:20 AM, Armin Rigo wrote: > On Fri, Dec 9, 2011 at 23:55, Martijn Faassen wrote: >> Yes, I figured merging would be the main benefit, that makes sense. >> Hopefully it can be made to work in parallel with the Python 2 >> implementation eventually though. > > After all, all you need is "hg" and a bit (or a lot) of hacking and > you can translate by reading modules from both branches... :-) The 'lot of' bit is what worries me. :) Regards, Martijn From benjamin at python.org Sat Dec 10 01:00:41 2011 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Dec 2011 19:00:41 -0500 Subject: [pypy-dev] Could someone give me an idea of what is where with RPython? In-Reply-To: References: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> Message-ID: 2011/12/9 Alexander Golec : > Hi Armin, > > Thanks for the answer, but I'm having a hard time wrapping my head around this. I'm reading about object spaces, and I keep wondering how do the object spaces find their way into the CPython interpreter. In other words, since there is an element of partial initialization to the translation process, what performs the initialization, and how does the initialized product find its way into the translator? > > My current understanding is: > > ?- Interpret PyPy on top of CPython > ?- The interpreted PyPy performs partial interpretation of its own source code until it reaches the RPython entry point No, the program is loaded into the process which is performing the translation; this will be a translated PyPy or CPython. Then the flow objspace starts building flow graphs from an entry point defined by the program to translate. > ?- From that point forward, the code are handled by the flow object space to produce code objects. > ?- The annotator runs and translation happens Technically, annotation drives flow graph creation, but yes. > > Is this correct? I've been spending my entire semester trying to grasp this, but I can't seem to get the totality of it into my head. -- Regards, Benjamin From alexgolecmailinglists at gmail.com Sat Dec 10 01:14:26 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Fri, 9 Dec 2011 19:14:26 -0500 Subject: [pypy-dev] Could someone give me an idea of what is where with RPython? In-Reply-To: References: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> Message-ID: <8ACB33EB-395D-4D73-B091-A6E431EBE20E@gmail.com> I guess that's bugging me is where does the flow object space live? It is in RPython or the PyPy interpreter? Also, does RPython exist as a standalone thing, i.e. can I say 'rpython something.py' and get a translation? Alex On Dec 9, 2011, at 7:00 PM, Benjamin Peterson wrote: > 2011/12/9 Alexander Golec : >> Hi Armin, >> >> Thanks for the answer, but I'm having a hard time wrapping my head around this. I'm reading about object spaces, and I keep wondering how do the object spaces find their way into the CPython interpreter. In other words, since there is an element of partial initialization to the translation process, what performs the initialization, and how does the initialized product find its way into the translator? >> >> My current understanding is: >> >> - Interpret PyPy on top of CPython >> - The interpreted PyPy performs partial interpretation of its own source code until it reaches the RPython entry point > > No, the program is loaded into the process which is performing the > translation; this will be a translated PyPy or CPython. 
Then the flow > objspace starts building flow graphs from an entry point defined by > the program to translate. > >> - From that point forward, the code are handled by the flow object space to produce code objects. >> - The annotator runs and translation happens > > Technically, annotation drives flow graph creation, but yes. > >> >> Is this correct? I've been spending my entire semester trying to grasp this, but I can't seem to get the totality of it into my head. > > > > > -- > Regards, > Benjamin From benjamin at python.org Sat Dec 10 01:17:25 2011 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Dec 2011 19:17:25 -0500 Subject: [pypy-dev] Could someone give me an idea of what is where with RPython? In-Reply-To: <8ACB33EB-395D-4D73-B091-A6E431EBE20E@gmail.com> References: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> <8ACB33EB-395D-4D73-B091-A6E431EBE20E@gmail.com> Message-ID: 2011/12/9 Alexander Golec : > I guess that's bugging me is where does the flow object space live? It is in RPython or the PyPy interpreter? The flow space is not RPython. It's in pypy/objspace/flow > Also, does RPython exist as a standalone thing, i.e. can I say 'rpython something.py' and get a translation? Yes and no. You have to create a target*.py . Look in pypy/translator/goal for examples. -- Regards, Benjamin From alexgolecmailinglists at gmail.com Sat Dec 10 01:25:16 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Fri, 9 Dec 2011 19:25:16 -0500 Subject: [pypy-dev] Could someone give me an idea of what is where with RPython? In-Reply-To: References: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> <8ACB33EB-395D-4D73-B091-A6E431EBE20E@gmail.com> Message-ID: So the RPython compiler reuses interpreter code from PyPy to perform bytecode interpretation on the objects that the were given it by the standard CPython dis module? Namely I'm thinking of the bits that handle branching code, where the interpreter must effectively take both branches at once. If I'm not mistaken, the PyPy VM uses a superset of the CPython bytecodes. Does that mean that RPython can perform translation either on bytecodes emitted by CPython or by PyPy? Alex On Dec 9, 2011, at 7:17 PM, Benjamin Peterson wrote: > 2011/12/9 Alexander Golec : >> I guess that's bugging me is where does the flow object space live? It is in RPython or the PyPy interpreter? > > The flow space is not RPython. It's in pypy/objspace/flow > >> Also, does RPython exist as a standalone thing, i.e. can I say 'rpython something.py' and get a translation? > > Yes and no. You have to create a target*.py . Look in > pypy/translator/goal for examples. > > > -- > Regards, > Benjamin From benjamin at python.org Sat Dec 10 01:38:57 2011 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Dec 2011 19:38:57 -0500 Subject: [pypy-dev] Could someone give me an idea of what is where with RPython? In-Reply-To: References: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> <8ACB33EB-395D-4D73-B091-A6E431EBE20E@gmail.com> Message-ID: 2011/12/9 Alexander Golec : > So the RPython compiler reuses interpreter code from PyPy to perform bytecode interpretation on the objects that the were given it by the standard CPython dis module? Namely I'm thinking of the bits that handle branching code, where the interpreter must effectively take both branches at once. Yes. The interpreter pulls the strings of a flow space. > > If I'm not mistaken, the PyPy VM uses a superset of the CPython bytecodes. 
Does that mean that RPython can perform translation either on bytecodes emitted by CPython or by PyPy? It's not a superset; it's a slight variant. But, yes, that's what happens when translating on CPython in fact. -- Regards, Benjamin From alexgolecmailinglists at gmail.com Sat Dec 10 01:46:03 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Fri, 9 Dec 2011 19:46:03 -0500 Subject: [pypy-dev] Could someone give me an idea of what is where with RPython? In-Reply-To: References: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> <8ACB33EB-395D-4D73-B091-A6E431EBE20E@gmail.com> Message-ID: <45748374-635E-45F9-814E-1796961C7AB9@gmail.com> I think it's coming together now, then. So then the code for object spaces are shared between PyPy and the RPython compiler, as is the code for abstract interpretation? The thing that triggered this was me wondering what does the interpretation of the bytecodes, but I'm guessing its the shared interpretation code? Alex On Dec 9, 2011, at 7:38 PM, Benjamin Peterson wrote: > 2011/12/9 Alexander Golec : >> So the RPython compiler reuses interpreter code from PyPy to perform bytecode interpretation on the objects that the were given it by the standard CPython dis module? Namely I'm thinking of the bits that handle branching code, where the interpreter must effectively take both branches at once. > > Yes. The interpreter pulls the strings of a flow space. > >> >> If I'm not mistaken, the PyPy VM uses a superset of the CPython bytecodes. Does that mean that RPython can perform translation either on bytecodes emitted by CPython or by PyPy? > > It's not a superset; it's a slight variant. But, yes, that's what > happens when translating on CPython in fact. > > > > -- > Regards, > Benjamin From benjamin at python.org Sat Dec 10 02:12:56 2011 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Dec 2011 20:12:56 -0500 Subject: [pypy-dev] Could someone give me an idea of what is where with RPython? In-Reply-To: <45748374-635E-45F9-814E-1796961C7AB9@gmail.com> References: <2A0BB6EE-EAC6-453A-862A-032FE00BFCC5@gmail.com> <8ACB33EB-395D-4D73-B091-A6E431EBE20E@gmail.com> <45748374-635E-45F9-814E-1796961C7AB9@gmail.com> Message-ID: 2011/12/9 Alexander Golec : > I think it's coming together now, then. So then the code for object spaces are shared between PyPy and the RPython compiler, as is the code for abstract interpretation? The thing that triggered this was me wondering what does the interpretation of the bytecodes, but I'm guessing its the shared interpretation code? Yep, pypy/interpreter/ is the pypy python interpreter with the std objspace and performs abstract interpretation with the flow space. -- Regards, Benjamin From alexgolecmailinglists at gmail.com Sat Dec 10 02:16:09 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Fri, 9 Dec 2011 20:16:09 -0500 Subject: [pypy-dev] Concerning bounded number of classes created Message-ID: <56A275FF-BEC3-4D2A-A363-ED0887B06960@gmail.com> Hey all, Sorry to keep asking questions on the mailing list, but I've been doing my semester research project on PyPy, and my deadline is beginning to loom... I have a question concerning what I think is a conflict between the RPython coding guide and the 2005 EU paper. The paper states: Our approach goes further and analyses live programs in memory: the program is allowed to contain fully dynamic sections, as long as these sections are entered a bounded num- ber of times. 
For example, the source code of the PyPy interpreter, which is itself written in this bounded-dynamism style, makes extensive use of the fact that it is possible to build new classes at any point in time -- not just during an initialisation phase -- as long as the number of new classes is bounded. For example, interpreter/gateway.py builds a custom wrapper class corresponding to each function that a particular variable can reference. There is a finite num- ber of functions in total, so this can only create a finite number of extra wrapper classes. While the coding guide says: definitions run-time definition of classes or functions is not allowed. I'm I understood the paper correctly, any piece of code can create zero or one class, but not infinity. However, this doesn't seem to jive with what the coding guide says. Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Sat Dec 10 02:18:53 2011 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Dec 2011 20:18:53 -0500 Subject: [pypy-dev] Concerning bounded number of classes created In-Reply-To: <56A275FF-BEC3-4D2A-A363-ED0887B06960@gmail.com> References: <56A275FF-BEC3-4D2A-A363-ED0887B06960@gmail.com> Message-ID: 2011/12/9 Alexander Golec : > Hey all, > > Sorry to keep asking questions on the mailing list, but I've been doing my > semester research project on PyPy, and my deadline is beginning to loom... > > I have a question concerning what I think is a conflict between the RPython > coding guide and the 2005 EU paper. The paper states: > > Our approach goes further and analyses live programs in memory: the program > is allowed to contain fully dynamic sections, as long as these sections are > entered a bounded num- ber of times. For example, the source code of the > PyPy interpreter, which is itself written in this bounded-dynamism style, > makes extensive use of the fact that it is possible to build new classes at > any point in time -- not just during an initialisation phase -- as long as > the number of new classes is bounded. For example, interpreter/gateway.py > builds a custom wrapper class corresponding to each function that a > particular variable can reference. There is a finite num- ber of functions > in total, so this can only create a finite number of extra wrapper classes. I'm not actually sure what this is referring to; it may have changed since 2005. > > While the coding guide says: > > > definitions > > run-time definition of classes or functions is not allowed. This is the truest statement currently. -- Regards, Benjamin From alexgolecmailinglists at gmail.com Sat Dec 10 02:21:27 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Fri, 9 Dec 2011 20:21:27 -0500 Subject: [pypy-dev] Concerning bounded number of classes created In-Reply-To: References: <56A275FF-BEC3-4D2A-A363-ED0887B06960@gmail.com> Message-ID: So at no point reachable from the entry_point (or is it run?) of the program can you create classes and functions? Alex On Dec 9, 2011, at 8:18 PM, Benjamin Peterson wrote: > 2011/12/9 Alexander Golec : >> Hey all, >> >> Sorry to keep asking questions on the mailing list, but I've been doing my >> semester research project on PyPy, and my deadline is beginning to loom... >> >> I have a question concerning what I think is a conflict between the RPython >> coding guide and the 2005 EU paper. 
The paper states: >> >> Our approach goes further and analyses live programs in memory: the program >> is allowed to contain fully dynamic sections, as long as these sections are >> entered a bounded num- ber of times. For example, the source code of the >> PyPy interpreter, which is itself written in this bounded-dynamism style, >> makes extensive use of the fact that it is possible to build new classes at >> any point in time -- not just during an initialisation phase -- as long as >> the number of new classes is bounded. For example, interpreter/gateway.py >> builds a custom wrapper class corresponding to each function that a >> particular variable can reference. There is a finite num- ber of functions >> in total, so this can only create a finite number of extra wrapper classes. > > I'm not actually sure what this is referring to; it may have changed since 2005. > >> >> While the coding guide says: >> >> >> definitions >> >> run-time definition of classes or functions is not allowed. > > This is the truest statement currently. > > > -- > Regards, > Benjamin From benjamin at python.org Sat Dec 10 02:22:20 2011 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Dec 2011 20:22:20 -0500 Subject: [pypy-dev] Concerning bounded number of classes created In-Reply-To: References: <56A275FF-BEC3-4D2A-A363-ED0887B06960@gmail.com> Message-ID: 2011/12/9 Alexander Golec : > So at no point reachable from the entry_point (or is it run?) of the program can you create classes and functions? All the classes that there will ever by instances of in the RPython program are known at translation. -- Regards, Benjamin From alexgolecmailinglists at gmail.com Sat Dec 10 02:41:47 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Fri, 9 Dec 2011 20:41:47 -0500 Subject: [pypy-dev] Concerning bounded number of classes created In-Reply-To: References: <56A275FF-BEC3-4D2A-A363-ED0887B06960@gmail.com> Message-ID: <503EAC32-61EB-42D7-8426-31E05B05A712@gmail.com> Also, if you take a look at gateway.py, you can see that type is being called. Does the interpreter treat class definitions and calls to type differently? https://bitbucket.org/pypy/pypy/src/94737f156c30/pypy/interpreter/gateway.py Alex On Dec 9, 2011, at 8:22 PM, Benjamin Peterson wrote: > 2011/12/9 Alexander Golec : >> So at no point reachable from the entry_point (or is it run?) of the program can you create classes and functions? > > All the classes that there will ever by instances of in the RPython > program are known at translation. > > > > -- > Regards, > Benjamin -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexgolecmailinglists at gmail.com Sat Dec 10 03:21:24 2011 From: alexgolecmailinglists at gmail.com (Alexander Golec) Date: Fri, 9 Dec 2011 21:21:24 -0500 Subject: [pypy-dev] In RPython, what is meant by 'live code objects'? Message-ID: <5FFC852E-C548-440A-9F9B-18095A62D925@gmail.com> Hey all, When the RPython receives the code objects, are all the objects that the interpreter would push and pop on the stack already present? Or does the abstract interpreter handle creating those objects as well? In other words, is interpretation abstract from the start of the program to the end? Is the interpreter concrete until the entry point, at which point it switches to the abstract interpreter? Or is the RPython compiler not responsible for interpreting the initialization portion, just for the bit after the entry point? 
Alex From benjamin at python.org Sat Dec 10 03:32:52 2011 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Dec 2011 21:32:52 -0500 Subject: [pypy-dev] Concerning bounded number of classes created In-Reply-To: <503EAC32-61EB-42D7-8426-31E05B05A712@gmail.com> References: <56A275FF-BEC3-4D2A-A363-ED0887B06960@gmail.com> <503EAC32-61EB-42D7-8426-31E05B05A712@gmail.com> Message-ID: 2011/12/9 Alexander Golec : > Also, if you take a look at gateway.py, you can see that type is being > called. Does the interpreter treat class definitions and calls to type > differently? > > https://bitbucket.org/pypy/pypy/src/94737f156c30/pypy/interpreter/gateway.py That happens before the flow space sees it. -- Regards, Benjamin From benjamin at python.org Sat Dec 10 03:33:56 2011 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Dec 2011 21:33:56 -0500 Subject: [pypy-dev] In RPython, what is meant by 'live code objects'? In-Reply-To: <5FFC852E-C548-440A-9F9B-18095A62D925@gmail.com> References: <5FFC852E-C548-440A-9F9B-18095A62D925@gmail.com> Message-ID: 2011/12/9 Alexander Golec : > Hey all, > > When the RPython receives the code objects, are all the objects that the interpreter would push and pop on the stack already present? Or does the abstract interpreter handle creating those objects as well? > > In other words, is interpretation abstract from the start of the program to the end? Is the interpreter concrete until the entry point, at which point it switches to the abstract interpreter? Or is the RPython compiler not responsible for interpreting the initialization portion, just for the bit after the entry point? The latter. -- Regards, Benjamin From rinu.matrix at gmail.com Sat Dec 10 05:01:58 2011 From: rinu.matrix at gmail.com (Rinu Boney) Date: Sat, 10 Dec 2011 09:31:58 +0530 Subject: [pypy-dev] PyPy Tool Chain Message-ID: Can almost any language be implemented in RPython and be faster ? ( like can we implement a langugae like java ? ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Sat Dec 10 05:05:59 2011 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Dec 2011 23:05:59 -0500 Subject: [pypy-dev] PyPy Tool Chain In-Reply-To: References: Message-ID: 2011/12/9 Rinu Boney : > Can almost any language be implemented in RPython and be faster ? > ( like can we implement a langugae like java ? ) Presumably, though I think you'd have to work to beat HotSpot. -- Regards, Benjamin From cfbolz at gmx.de Sat Dec 10 09:04:20 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Sat, 10 Dec 2011 09:04:20 +0100 Subject: [pypy-dev] PyPy Tool Chain In-Reply-To: References: Message-ID: <4EE31284.7060402@gmx.de> On 12/10/2011 05:01 AM, Rinu Boney wrote: > Can almost any language be implemented in RPython and be faster ? ( > like can we implement a langugae like java ? ) I would say that the language has to be sort of dynamic for it to make sense. A variety of language *have* been implemented, giving credibility to that claim (barebones JS, Prolog, Scheme). 
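(To make that concrete, the skeleton shared by these RPython interpreters is quite small. The sketch below is hypothetical -- a toy counter machine rather than any of the languages just mentioned, and the jit import path is quoted from memory of the 2011 source tree, so treat the names as approximate -- but it shows the two ingredients: an ordinary dispatch loop, plus a JitDriver hint that tells the translated JIT where the interpreter loop is.)

-------------------------------
from pypy.rlib.jit import JitDriver   # import path as of the 2011 tree; approximate

jitdriver = JitDriver(greens=['pc', 'program'], reds=['acc'])

def interpret(program):
    acc = 0
    pc = 0
    while pc < len(program):
        # marks the start of one interpreted instruction for the JIT
        jitdriver.jit_merge_point(pc=pc, program=program, acc=acc)
        op = program[pc]
        if op == '+':
            acc += 1
        elif op == '-':
            acc -= 1
        pc += 1
    return acc

def entry_point(argv):
    print interpret("+++++-")
    return 0

def target(*args):
    return entry_point, None
-------------------------------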
Cheers, Carl Friedrich From arigo at tunes.org Sat Dec 10 10:58:36 2011 From: arigo at tunes.org (Armin Rigo) Date: Sat, 10 Dec 2011 10:58:36 +0100 Subject: [pypy-dev] Concerning bounded number of classes created In-Reply-To: <56A275FF-BEC3-4D2A-A363-ED0887B06960@gmail.com> References: <56A275FF-BEC3-4D2A-A363-ED0887B06960@gmail.com> Message-ID: Hi, On Sat, Dec 10, 2011 at 02:16, Alexander Golec wrote: > I have a question concerning what I think is a conflict between the RPython > coding guide and the 2005 EU paper. This is not a conflict. The coding guide says that you cannot create any RPython class at runtime. This is true. A priori you would think that this means that all RPython classes must be created before we translate the program, but not exactly: as the 2005 EU paper says, you can create (a bounded number of) extra RPython classes *during* translation, more precisely during annotation (and not only *before* translation starts), using tricks like specialize:memo functions. Once annotation finishes, no more RPython class can be created. A bient?t, Armin. From arigo at tunes.org Sat Dec 10 11:13:23 2011 From: arigo at tunes.org (Armin Rigo) Date: Sat, 10 Dec 2011 11:13:23 +0100 Subject: [pypy-dev] PyPy Tool Chain In-Reply-To: <4EE31284.7060402@gmx.de> References: <4EE31284.7060402@gmx.de> Message-ID: Hi, On Sat, Dec 10, 2011 at 09:04, Carl Friedrich Bolz wrote: > I would say that the language has to be sort of dynamic for it to make > sense. A variety of language *have* been implemented, giving credibility > to that claim (barebones JS, Prolog, Scheme). ...you forget Haskell :-) For Java, it would make sense in the sense of quickly testing what kind of results we can get with a tracing JIT. But you will never beat HotSpot, or only on extreme examples; and by now I'm sure that they are various tracing JITs for Java around and well-polished. PyPy is better at handling very complicated languages, like Python. Direct access to frames in the languages and other similar issues are particularly messy when writing the JIT manually --- not to mention the sheer amount of built-in types and dispatch rules of any single operation. I'm not saying that Java is a toy-simple language, but rather that complexity comes from other places, like advanced multithreaded GCs, that are out of the scope of Python and that we did not investigate at all with PyPy. A bient?t, Armin. From holger at merlinux.eu Sat Dec 10 13:14:00 2011 From: holger at merlinux.eu (holger krekel) Date: Sat, 10 Dec 2011 12:14:00 +0000 Subject: [pypy-dev] python 2 and python 3 sharing an interpreter? In-Reply-To: References: Message-ID: <20111210121400.GT27920@merlinux.eu> On Thu, Dec 08, 2011 at 05:35 -0500, Alex Gaynor wrote: > On Thu, Dec 8, 2011 at 5:16 AM, Martijn Faassen wrote: > > Would it be possible to have an interpreter that could support both Python > > 2 and Python 3 modules in the same runtime? I.e, an interpreter that > > supports importing Python 3 code from Python 2 modules, and vice versa? > > > > It would tremendously help the Python ecosystem for such a solution to be > > out there, because in such a situation people can start using Python 3 > > codebases in Python 2, encouraging more people to port their libraries to > > Python 3, and people can use Python 2 codebases in Python 3, encouraging > > more people to start projects in Python 3. I can't stress enough how much I > > believe that would help the Python ecosystem! 
> > > > I don't think it can, or should be done, here's why: Say you have a Python > 2 dictionary {"a": 3}, and you pass this over to py3k land, what does it > become? Logically it becomes a {b"a": 3}, two problems: a) it now has > bytes for keys, which means you can't use it as a **kwargs, b) internally > ints have a totally different representation, since there's now only a > unified long type (this one doesn't apply to pypy as much, just throwing it > out there for completeness). Python 2 fundamentally does the string type > wrong, it's both the general string type for everythign, as well as the > byte sequence, the result is no program has enough semantic knowledge to > know what you mean by a string when you pass it over hte py3k boundary. Alex, IIRC you were at Quora involved in a project connecting Python2 and pypy with execnet, right? Do you agree that that connecting py3 and py2 should work just as well? In any case, the execnet serializer allows policies for passing strings/unicode between py2 and py3 (and the execnet development branch contains even finer grained possibilities). best, holger From rinu.matrix at gmail.com Sun Dec 11 03:00:20 2011 From: rinu.matrix at gmail.com (Rinu Boney) Date: Sun, 11 Dec 2011 07:30:20 +0530 Subject: [pypy-dev] OS Message-ID: can RPython be converted to low level code as to create something like a kernel or something low level as that ? what about an OS with assembly lang , little C/C++ and RPython ? can an OS like android be developed using RPython ? ( by RPython i also mean using the PyPy tool chain ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.leslie.ttg at gmail.com Sun Dec 11 07:34:56 2011 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Sun, 11 Dec 2011 17:34:56 +1100 Subject: [pypy-dev] OS In-Reply-To: References: Message-ID: With a little work it would be possible to target ring0 with rpython, but the real question is why you would want to. There are many other languages better suited to the task. On 11/12/2011 1:01 PM, "Rinu Boney" wrote: can RPython be converted to low level code as to create something like a kernel or something low level as that ? what about an OS with assembly lang , little C/C++ and RPython ? can an OS like android be developed using RPython ? ( by RPython i also mean using the PyPy tool chain ) _______________________________________________ pypy-dev mailing list pypy-dev at python.org http://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sun Dec 11 10:59:58 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 11 Dec 2011 11:59:58 +0200 Subject: [pypy-dev] OS In-Reply-To: References: Message-ID: On Sun, Dec 11, 2011 at 8:34 AM, William ML Leslie wrote: > With a little work it would be possible to target ring0 with rpython, but > the real question is why you would want to. There are many other languages > better suited to the task. > > On 11/12/2011 1:01 PM, "Rinu Boney" wrote: > > can RPython be converted to low level code as to create something like a > kernel or something low level as that ? > what about an OS with assembly lang , little C/C++ and RPython ? > can an OS like android be developed using RPython ? > ( by RPython i also mean using the PyPy tool chain ) I think it would be a cool idea. The main problem comes from a fact that RPython is a garbage collected language. 
You would need to be extra careful around places which require timely response. Otherwise all necessary things should be there, PyPy integrates nicely with C-level structures. William - what languages are better suited C? Cheers, fijal From arigo at tunes.org Sun Dec 11 11:50:41 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 11 Dec 2011 11:50:41 +0100 Subject: [pypy-dev] OS In-Reply-To: References: Message-ID: Hi Fijal, On Sun, Dec 11, 2011 at 10:59, Maciej Fijalkowski wrote: > William - what languages are better suited C? I agree with William's point of view. Yes, I would imagine that C is a good and well-supported language to do this kind of things. If your goals include "I don't really want a garbage collector" and "I definitely have no use for a JIT generator" and "if only I could just do everything as if it was C", then RPython is a bad target imho --- use C. A bient?t, Armin. From fijall at gmail.com Sun Dec 11 16:04:56 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 11 Dec 2011 17:04:56 +0200 Subject: [pypy-dev] OS In-Reply-To: References: Message-ID: On Sun, Dec 11, 2011 at 12:50 PM, Armin Rigo wrote: > Hi Fijal, > > On Sun, Dec 11, 2011 at 10:59, Maciej Fijalkowski wrote: >> William - what languages are better suited C? > > I agree with William's point of view. ?Yes, I would imagine that C is > a good and well-supported language to do this kind of things. ?If your > goals include "I don't really want a garbage collector" and "I > definitely have no use for a JIT generator" and "if only I could just > do everything as if it was C", then RPython is a bad target imho --- > use C. There was an experimental operating system written in C# (singularity I think), so it really does depend on what you're after. If the goal is "I want a POSIX" then yes, but if the goal is "I want to experiment with an OS written in a high level language", then RPython seems like a better fit. > > > A bient?t, > > Armin. From arigo at tunes.org Sun Dec 11 16:24:05 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 11 Dec 2011 16:24:05 +0100 Subject: [pypy-dev] OS In-Reply-To: References: Message-ID: Hi Maciej, On Sun, Dec 11, 2011 at 16:04, Maciej Fijalkowski wrote: > There was an experimental operating system written in C# (singularity > I think), so it really does depend on what you're after. I know; there are experimental OSes in various languages, including also high-level languages developed precisely for this purpose. My point is that RPython is in neither category: it is not specially developed for OSes, and it is not a well-developed all-purposes language which comes with a good collection of tools e.g. for debugging. A bient?t, Armin. From lac at openend.se Sun Dec 11 17:30:05 2011 From: lac at openend.se (Laura Creighton) Date: Sun, 11 Dec 2011 17:30:05 +0100 Subject: [pypy-dev] OS In-Reply-To: Message from Armin Rigo of "Sun, 11 Dec 2011 16:24:05 +0100." References: Message-ID: <201112111630.pBBGU5iE027755@theraft.openend.se> I worked on the Tunis OS at the University of Toronto, which was an OS written in Concurrent Euclid. I think -- especially if Software Transactional Memory works out, there will be interest in writing an OS, not in RPython, but in Python. So then, well, maybe you will want a JIT ... I think we will only be able to finally ditch unix/linux for _something significantly better_ when we get to write the something better in a higher level language than C. Or Java. 
Laura From bokr at oz.net Sun Dec 11 17:40:21 2011 From: bokr at oz.net (Bengt Richter) Date: Sun, 11 Dec 2011 17:40:21 +0100 Subject: [pypy-dev] .py -> .pyc -> (.pyc2 + .so) ? Message-ID: Just musing -- if pypy can discover pure functions (or functions with simple static global side effects) and the jit transforms them into machine code in a runtime memory-resident image somewhere, could this image be transformed with some .so boilerplate and metadata into a loadable module which could be discovered for use more or less like a .pyc that corresponds to .py is used? Could the jit perhaps write into a memory-mapped file that could be closed on exit to preserve the image for possible final post-processing into a standard format .so full of ready-made functions and supporting data? If suitable metadata were included, perhaps the .so could be loaded and used by a C or C++ program linking it? Meaning there would be a way to write a C-compatible "library" using pypy and python module source syntax. If you 'can never be sure' some function is suitable, could you pass pypy a support file (e.g. -jit purefuncfile=thesupportfile) with a line per function specifying modulefile and access path (maybe start with global functions only), asserting suitability? Regards, Bengt Richter From benjamin at python.org Sun Dec 11 17:38:50 2011 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 11 Dec 2011 11:38:50 -0500 Subject: [pypy-dev] .py -> .pyc -> (.pyc2 + .so) ? In-Reply-To: References: Message-ID: 2011/12/11 Bengt Richter : > Just musing -- if pypy can discover pure functions (or functions with simple > static > global side effects) and the jit transforms them into machine code in a > runtime memory-resident > image somewhere, could this image be transformed with some .so boilerplate > and metadata > into a loadable module which could be discovered for use more or less like a > .pyc that > corresponds to .py is used? Could the jit perhaps write into a memory-mapped > file that > could be closed on exit to preserve the image for possible final > post-processing into > a standard format .so full of ready-made functions and supporting data? Basically, no. JIT compiled functions have lots of their runtime context (like memory addresses) hardcoded into thme. -- Regards, Benjamin From bokr at oz.net Sun Dec 11 18:13:44 2011 From: bokr at oz.net (Bengt Richter) Date: Sun, 11 Dec 2011 18:13:44 +0100 Subject: [pypy-dev] .py -> .pyc -> (.pyc2 + .so) ? In-Reply-To: References: Message-ID: On 12/11/2011 05:38 PM Benjamin Peterson wrote: > 2011/12/11 Bengt Richter: >> Just musing -- if pypy can discover pure functions (or functions with simple >> static >> global side effects) and the jit transforms them into machine code in a >> runtime memory-resident >> image somewhere, could this image be transformed with some .so boilerplate >> and metadata >> into a loadable module which could be discovered for use more or less like a >> .pyc that >> corresponds to .py is used? Could the jit perhaps write into a memory-mapped >> file that >> could be closed on exit to preserve the image for possible final >> post-processing into >> a standard format .so full of ready-made functions and supporting data? > > Basically, no. JIT compiled functions have lots of their runtime > context (like memory addresses) hardcoded into thme. 
> > How many address spaces are used, and what would it take to emit (optionally of course) which segments stuff belonged to, like .text and .data etc, so an ELF32 relocatable loadable could be constructed from the image + extra emitted data? Would it be really hard to direct this special jit output to its own run time segments for easier capture, even if at the cost of indirections from the really dynamic contexts? (my vague suppositions involved here ;-) How about just for designated modules, giving them basic .text, .data, and? Regards, Bengt Richter From arigo at tunes.org Sun Dec 11 18:32:38 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 11 Dec 2011 18:32:38 +0100 Subject: [pypy-dev] .py -> .pyc -> (.pyc2 + .so) ? In-Reply-To: References: Message-ID: Hi Bengt, On Sun, Dec 11, 2011 at 18:13, Bengt Richter wrote: >> Basically, no. JIT compiled functions have lots of their runtime >> context (like memory addresses) hardcoded into thme. > > How many address spaces are used, (...) The JIT generates machine code containing a large number of constant addresses --- constant at the time the machine code is written. The vast majority is probably not at all constants that you find in the executable, with a nice link name. E.g. the addresses of Python classes are used all the time, but Python classes don't come statically from the executable; they are created anew every time you restart your program. This makes saving and reloading machine code completely impossible without some very advanced way of mapping addresses in the old (now-dead) process to addresses in the new process, including checking that all the previous assumptions about the (now-dead) object are still true about the new object. (Added as a FAQ entry :-) A bient?t, Armin. From estama at gmail.com Sun Dec 11 21:28:29 2011 From: estama at gmail.com (Elefterios Stamatogiannakis) Date: Sun, 11 Dec 2011 22:28:29 +0200 Subject: [pypy-dev] ctypes rawffi and ffi In-Reply-To: References: Message-ID: <4EE5126D.4020402@gmail.com> I'm exploring pypy's code so as to speed up callbacks from C code, so as to speed up sqlite module's UDF. I have some questions: - What are the differences between ctypes, rawffi and ffi. Where should each one of them be used? - I see that ctypes is build on top of rawffi and ffi. If one wishes to work around ctypes (so as to not have ctype's overhead) which of the rawffi or ffi should he use? Which of the two is faster at runtime? - How can i create a null pointer with _ffi? And some remarks: By only modifying pypy's sqlite module code, i managed to speed up sqlite's callbacks by 30% (for example there is a "for i in range(nargs)" line in _sqlite3. _convert_params, which is a hot path). Also the following line in _ctypes/function.py ._wrap_callable args = [argtype._CData_retval(argtype.from_address(arg)._buffer) for argtype, arg in zip(argtypes, args)] Occupies a large percentage of the overall callback time (around 60-70%). Assuming that pypy JITs all of the above callback code. Is it a problem having all these memory allocations for each callback (my test does 10M callbacks)? Is there a way to avoid as much as possible all these memory allocations. Right now CPython runs my test (10M callbacks) in 1.2 sec and pypy needs from 9 to 14 secs. I suspect that the large spread of pypy's run times are due to GC. Thank you in advance for your answers. lefteris. 
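(The test script itself is not included in the archive; the snippet below is a hypothetical reconstruction of that kind of benchmark using only the standard sqlite3 module. The UDF is deliberately trivial so that nearly all of the measured time is the C-to-Python callback overhead under discussion.)

-------------------------------
import sqlite3, time

def step(x):
    return x + 1          # trivial UDF: the work is negligible, the callback cost dominates

con = sqlite3.connect(":memory:")
con.create_function("step", 1, step)
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", ((i,) for i in xrange(1000000)))

t0 = time.time()
con.execute("SELECT sum(step(x)) FROM t").fetchall()   # one callback into Python per row
print "1000000 callbacks took", time.time() - t0, "seconds"
-------------------------------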
From william.leslie.ttg at gmail.com Sun Dec 11 23:11:02 2011 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Mon, 12 Dec 2011 09:11:02 +1100 Subject: [pypy-dev] OS In-Reply-To: References: Message-ID: On 11/12/2011, Rinu Boney wrote: > which are the languages suited for it other than c/c++ ? Of the safe languages that I know have been used for operating systems, there have been C# (Singularity, Windows 8?), Java (JNode), and Haskell (fillet-o-fish); but there are languages that are perhaps better suited to it, such as cyclone, bitc, and ATS. Occasionally you need to be able to specify the layout of structures in memory to interact with hardware, and it is particularly useful to have safe handling of unions with custom descriminators. BitC in particular was designed explicitly for this purpose (writing a kernel with verified memory safety, among other things), although I don't know if anyone has used it for that purpose yet. Think too that when the OS starts, there isn't even a heap yet, and you don't know what space you can use to prepare one. > wouldn't it be fun and readable and 'fast development' if written in python > ? There's very little a kernel has to do, and what it must do, it must normally do with particular time bounds. Implementing your kernel in a language that relies on GC means you must be careful about how much allocation can occur on each system call. > can't we have a single language suited for everything ? I can't really answer that one. Part of me hopes so. But the thought of writing a kernel in rpython is not a pleasant one for me. > cant we write it in python to quickly enter the market ? Well, you don't spend much time writing a kernel anyway - use an existing kernel and then run python in userspace. It's pretty unusual to need your code colocated with the kernel, but it would be easier to do with a runtime like pypy (because of memory safety, you can guarantee that code will never do anything that traps). I don't know what you would do about interrupt handlers and GC. > doesn't pypy convert the python code to optimised c code ( am i > wrong ? ) It does, yes. But there is much work to do to remove the dependence on ANSI C functionality; things like memory mapping and allocation, file descriptors, etc. -- William Leslie From pypy at pocketnix.org Mon Dec 12 01:26:35 2011 From: pypy at pocketnix.org (Da_Blitz) Date: Mon, 12 Dec 2011 00:26:35 +0000 Subject: [pypy-dev] OS In-Reply-To: References: Message-ID: <20111212002635.GA32313@pocketnix.org> On Mon, Dec 12, 2011 at 09:11:02AM +1100, William ML Leslie wrote: > Well, you don't spend much time writing a kernel anyway - use an > existing kernel and then run python in userspace. It's pretty unusual > to need your code colocated with the kernel, but it would be easier to > do with a runtime like pypy (because of memory safety, you can > guarantee that code will never do anything that traps). I don't know > what you would do about interrupt handlers and GC. This is somthing i have been playing with a tiny bit and seems to work well. i have a bunch of libs to interface directly to the linux syscalls, various c functions to specific libs that are useful and others to pull info out of /sys and /proc. it use the containerization/namespace stuff (same stuff LXC uses) in linux to isolate python in its own namespace. 
my intent is to make a primarily python userspace for my own personal usage and experiment with some new ideas. as it is a namespace, i can still use the traditional linux userspace to manage the hardware and for remote access/unbreaking things, so essentially you get a management domain and multiple self contained python based environments. resource management is done via cgroups instead of relying on pypy's memory limiting features due to the extra flexibility you gain (eg hierarchical limitation on memory and swap usage, the ability to change allocations on the fly) From fijall at gmail.com Tue Dec 13 10:03:40 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Dec 2011 11:03:40 +0200 Subject: [pypy-dev] GZip benchmark Message-ID: Hello There is an open pull request with gzip benchmark. Do we want a gzip benchmark in our nightly run? https://bitbucket.org/pypy/benchmarks/pull-request/1/added-gzip-benchmark How much of stdlib we really want to measure? Cheers, fijal From alex.gaynor at gmail.com Tue Dec 13 10:41:29 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Tue, 13 Dec 2011 04:41:29 -0500 Subject: [pypy-dev] GZip benchmark In-Reply-To: References: Message-ID: On Tue, Dec 13, 2011 at 4:03 AM, Maciej Fijalkowski wrote: > Hello > > There is an open pull request with gzip benchmark.
Do we want a gzip > >> benchmark in our nightly run? > >> > >> > https://bitbucket.org/pypy/benchmarks/pull-request/1/added-gzip-benchmark > >> > >> How much of stdlib we really want to measure? > >> > >> Cheers, > >> fijal > >> _______________________________________________ > >> pypy-dev mailing list > >> pypy-dev at python.org > >> http://mail.python.org/mailman/listinfo/pypy-dev > > > > > > Without looking at this specific benchmark, yes we should have a gzip > > benchmark, especially since it used to be slow (maybe still is?). > > > > Alex > > > > If so, feel like looking there and potentially merging? > It looks ok to me, if I merge does anyhting need to be done on the server or with codespeed? Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Tue Dec 13 11:06:49 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Dec 2011 12:06:49 +0200 Subject: [pypy-dev] GZip benchmark In-Reply-To: References: Message-ID: On Tue, Dec 13, 2011 at 11:55 AM, Alex Gaynor wrote: > > > On Tue, Dec 13, 2011 at 4:47 AM, Maciej Fijalkowski > wrote: >> >> On Tue, Dec 13, 2011 at 11:41 AM, Alex Gaynor >> wrote: >> > >> > >> > On Tue, Dec 13, 2011 at 4:03 AM, Maciej Fijalkowski >> > wrote: >> >> >> >> Hello >> >> >> >> There is an open pull request with gzip benchmark. Do we want a gzip >> >> benchmark in our nightly run? >> >> >> >> >> >> https://bitbucket.org/pypy/benchmarks/pull-request/1/added-gzip-benchmark >> >> >> >> How much of stdlib we really want to measure? >> >> >> >> Cheers, >> >> fijal >> >> _______________________________________________ >> >> pypy-dev mailing list >> >> pypy-dev at python.org >> >> http://mail.python.org/mailman/listinfo/pypy-dev >> > >> > >> > Without looking at this specific benchmark, yes we should have a gzip >> > benchmark, especially since it used to be slow (maybe still is?). >> > >> > Alex >> > >> >> If so, feel like looking there and potentially merging? > > > It looks ok to me, if I merge does anyhting need to be done on the server or > with codespeed? > > Alex CPython probably has to be rerun, can be done later though > > -- > "I disapprove of what you say, but I will defend to the death your right to > say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > From anto.cuni at gmail.com Tue Dec 13 11:10:31 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 13 Dec 2011 11:10:31 +0100 Subject: [pypy-dev] ctypes rawffi and ffi In-Reply-To: <4EE5126D.4020402@gmail.com> References: <4EE5126D.4020402@gmail.com> Message-ID: <4EE72497.4030107@gmail.com> Hello Elefterios, On 12/11/2011 09:28 PM, Elefterios Stamatogiannakis wrote: > I'm exploring pypy's code so as to speed up callbacks from C code, so as to > speed up sqlite module's UDF. > > I have some questions: > > - What are the differences between ctypes, rawffi and ffi. Where should each > one of them be used? _rawffi and _ffi are pypy-specific modules which expose the functionalities of libffi. _rawffi is old and slow, while _ffi is new and designed to be JIT friendly. However, at the moment of writing not all features of _rawffi have been ported to _ffi yet, that's why we need to keep both around. ctypes is implemented on top of _rawffi/_ffi. The plan for the future is to kill _rawffi at some point. 
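(As a rough illustration of that layering: the first half below is ordinary, portable ctypes; the second half shows the lower-level, pypy-only _ffi interface as remembered from the 1.7-era sources, so the exact names -- CDLL, getfunc, types.double -- are an assumption rather than a reference.)

-------------------------------
# application-level ctypes; works on CPython and PyPy alike
from ctypes import CDLL, c_double
libm = CDLL("libm.so.6")                    # Linux-specific library name, for illustration
libm.pow.argtypes = [c_double, c_double]
libm.pow.restype = c_double
print libm.pow(2.0, 10.0)

# roughly what pypy's ctypes lowers that call to, via the pypy-only _ffi module
from _ffi import CDLL as FFI_CDLL, types    # assumed interface, see note above
ffi_libm = FFI_CDLL("libm.so.6")
ffi_pow = ffi_libm.getfunc("pow", [types.double, types.double], types.double)
print ffi_pow(2.0, 10.0)
-------------------------------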
> - I see that ctypes is build on top of rawffi and ffi. If one wishes to work > around ctypes (so as to not have ctype's overhead) which of the rawffi or ffi > should he use? Which of the two is faster at runtime? if possible, you should use _ffi. Note that so far with _ffi you can only call functions, but e.g. you cannot define a callback. If you are interested in this stuff, you might want to look at the ffistruct branch, which adds support for jit-friendly structures to _ffi. Note that the public interface of _ffi is still fluid, it might change in the future. E.g., right now pointers are represented just by using python longs, but we might want to use a proper well-typed wrapper in the future. > - How can i create a null pointer with _ffi? As I said above, right now pointers are passed around as Python longs, so you can just use 0 for the null pointer. > And some remarks: > > By only modifying pypy's sqlite module code, i managed to speed up sqlite's > callbacks by 30% (for example there is a "for i in range(nargs)" line in > _sqlite3. _convert_params, which is a hot path). that's nice. Patches are welcome :-) > Also the following line in _ctypes/function.py ._wrap_callable > > args = [argtype._CData_retval(argtype.from_address(arg)._buffer) > for argtype, arg in zip(argtypes, args)] > > Occupies a large percentage of the overall callback time (around 60-70%). yes, I think that we never looked at performance of ctypes callback. Good spot :-). In other parts of ctypes there are hacks and shortcuts for performances. E.g., in _wrap_result we check whether the result type is primitive, and in that case we just avoid to call _CData_retval. Maybe it's possible to introduce a similar shortcut there. > Assuming that pypy JITs all of the above callback code. Is it a problem having > all these memory allocations for each callback (my test does 10M callbacks)? > Is there a way to avoid as much as possible all these memory allocations. > > Right now CPython runs my test (10M callbacks) in 1.2 sec and pypy needs from > 9 to 14 secs. I suspect that the large spread of pypy's run times are due to GC. I think it's "simply" because we never optimized callbacks. When I ported ctypes from _rawffi to _ffi I got speedups up to 100 times faster. In case of callbacks I expect a minor gain, because the JIT cannot inline across them, but I still think there is room for lots of improvements. If you are interested in trying it, I'll be more than glad to help you :) ciao, Anto From estama at gmail.com Tue Dec 13 12:39:53 2011 From: estama at gmail.com (Eleytherios Stamatogiannakis) Date: Tue, 13 Dec 2011 13:39:53 +0200 Subject: [pypy-dev] ctypes rawffi and ffi In-Reply-To: <4EE72497.4030107@gmail.com> References: <4EE5126D.4020402@gmail.com> <4EE72497.4030107@gmail.com> Message-ID: <4EE73989.7080003@gmail.com> First of all many many thanks Antonio for all this information. It helps me a lot. For example for some time i was struggling trying to find how to do callbacks in ffi :-) . Concerning ctypes. Since my last email i found two points where ctypes can be easily made faster. The first is all the "zip" functions that happen inside ctypes. By changing them to itertools.izip, a small speed up can be had (sorry for not having more concrete speed up numbers but i didn't write down the speed differences). The second is in _ctypes.basics: I think that the calculation "sys.maxint * 2 + 1" in cdata_from_address is recalculated again and again. By precalculating this calculation a sizable speed up can also be had. 
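(In schematic form the two changes described above look like this; the function names are made up for illustration and this is not the actual _ctypes source.)

-------------------------------
import sys
from itertools import izip

# 1) zip() builds a throwaway list on every call; izip() iterates lazily instead
def convert_args(argtypes, args):
    return [argtype(arg) for argtype, arg in izip(argtypes, args)]

# 2) hoist the constant out of the hot path instead of recomputing it on each call
_ADDRESS_MASK = sys.maxint * 2 + 1     # computed once, at import time

def normalize_address(address):
    return address & _ADDRESS_MASK     # keep the address as an unsigned value
-------------------------------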
Nevertheless, the largest speed up (~7x-10x) i managed to do, was when i mostly bypassed ctypes and used rawffi and ffi directly. Pypy's regular sqlite3 module runs attached (sqlitepypy.py) test in 7-10 secs (with the speed up ctypes). Also, i think that the variability of the times is due to GC. Changing sqlitepypy.py to use (attached) msqlite3 instead of sqlite3 module, the same test needs 1.5 sec. Regular CPython needs ~500msec . I would be glad if you took a look at the changed msqlite3 (see Connection.create_function), to comment about the changes. I have to say that i like rawffi and ffi a lot more than ctypes. ffi's simplicity especially is very welcoming. Ctypes seem to me to be very overengineered. I would very much prefer an API with which i could simply acquire the value of a C type, or directly dereference a pointer (it took me some time to find that rawffi.Arrays are ideal for this), than all this "wrapping" around that happens with ctypes. Also i would prefer an API where calls and callbacks aren't wrapped. I had to do a hack in msqlite3 to disable ctype's callback wrapping. After all of the above tests, i believe that right now it isn't possible to achieve the same callback speed as regular CPython with pypy's current infrastructure. So when will the ffistruct branch be integrated into pypy? I would like to run some tests on it :-) . l. On 13/12/11 12:10, Antonio Cuni wrote: > Hello Elefterios, > > On 12/11/2011 09:28 PM, Elefterios Stamatogiannakis wrote: >> I'm exploring pypy's code so as to speed up callbacks from C code, so as to >> speed up sqlite module's UDF. >> >> I have some questions: >> >> - What are the differences between ctypes, rawffi and ffi. Where should each >> one of them be used? > > _rawffi and _ffi are pypy-specific modules which expose the functionalities of > libffi. _rawffi is old and slow, while _ffi is new and designed to be JIT > friendly. However, at the moment of writing not all features of _rawffi have > been ported to _ffi yet, that's why we need to keep both around. > > ctypes is implemented on top of _rawffi/_ffi. The plan for the future is to > kill _rawffi at some point. > >> - I see that ctypes is build on top of rawffi and ffi. If one wishes to work >> around ctypes (so as to not have ctype's overhead) which of the rawffi or ffi >> should he use? Which of the two is faster at runtime? > > if possible, you should use _ffi. Note that so far with _ffi you can only call > functions, but e.g. you cannot define a callback. If you are interested in > this stuff, you might want to look at the ffistruct branch, which adds support > for jit-friendly structures to _ffi. > > Note that the public interface of _ffi is still fluid, it might change in the > future. E.g., right now pointers are represented just by using python longs, > but we might want to use a proper well-typed wrapper in the future. > >> - How can i create a null pointer with _ffi? > > As I said above, right now pointers are passed around as Python longs, so you > can just use 0 for the null pointer. > >> And some remarks: >> >> By only modifying pypy's sqlite module code, i managed to speed up sqlite's >> callbacks by 30% (for example there is a "for i in range(nargs)" line in >> _sqlite3. _convert_params, which is a hot path). > > that's nice. 
Patches are welcome :-) > >> Also the following line in _ctypes/function.py ._wrap_callable >> >> args = [argtype._CData_retval(argtype.from_address(arg)._buffer) >> for argtype, arg in zip(argtypes, args)] >> >> Occupies a large percentage of the overall callback time (around 60-70%). > > yes, I think that we never looked at performance of ctypes callback. Good spot > :-). > > In other parts of ctypes there are hacks and shortcuts for performances. E.g., > in _wrap_result we check whether the result type is primitive, and in that > case we just avoid to call _CData_retval. Maybe it's possible to introduce a > similar shortcut there. > >> Assuming that pypy JITs all of the above callback code. Is it a problem having >> all these memory allocations for each callback (my test does 10M callbacks)? >> Is there a way to avoid as much as possible all these memory allocations. >> >> Right now CPython runs my test (10M callbacks) in 1.2 sec and pypy needs from >> 9 to 14 secs. I suspect that the large spread of pypy's run times are due to GC. > > I think it's "simply" because we never optimized callbacks. When I ported > ctypes from _rawffi to _ffi I got speedups up to 100 times faster. In case of > callbacks I expect a minor gain, because the JIT cannot inline across them, > but I still think there is room for lots of improvements. > > If you are interested in trying it, I'll be more than glad to help you :) > > ciao, > Anto -------------- next part -------------- A non-text attachment was scrubbed... Name: sqlitetest.zip Type: application/zip Size: 10527 bytes Desc: not available URL: From tbaldridge at gmail.com Tue Dec 13 15:41:33 2011 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Tue, 13 Dec 2011 08:41:33 -0600 Subject: [pypy-dev] Non standard ASTs Message-ID: For some time now, I've been working on my Clojure/pypy implementation. However, I'm starting to wish I had a few Clojure facilities while writing my rpython clojure interpreter. Basically I would love to have macros, protocols, symbol resolution, and a ton of other little features for helping with run-time code generation and the like. So this is when I started thinking about generating python code from Clojure source. That is, in Python we have a AST, and in Clojure we can parse LISP lists and convert them to AST constructs. This is when I started to realize where Python falls short of the flexability of Clojure in a few key areas: 1) multiline closures (lambdas with more than one statement)2) IfExpr with multiline bodies. I tried working with the ast module in Python 2.7...and it's horribly restricted. I found a old lib from PEAK that does what I want (BytecodeAssembler) but it's fairly old, and no longer supported. So this is when I thought of the ast in /pypy/interpreter/astcompiler/ Does the pypy interpreter support non-standard constructs like IfExpr with a multiline body? And if not, could it be adapted to support this? I'd rather not drop all the way to emitting Python bytecode...but I may have to. My end goal is to have my target load some rClojure (as I call it) code. And then have pypy translate this rClojure code to a native interpreter. Yes, we're talking about adding a new front-end to pypy. Timothy -- ?One of the main causes of the fall of the Roman Empire was that?lacking zero?they had no way to indicate successful termination of their C programs.? 
(Robert Firth) From benjamin at python.org Tue Dec 13 15:49:50 2011 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 13 Dec 2011 09:49:50 -0500 Subject: [pypy-dev] Non standard ASTs In-Reply-To: References: Message-ID: 2011/12/13 Timothy Baldridge : > For some time now, I've been working on my Clojure/pypy > implementation. However, I'm starting to wish I had a few Clojure > facilities while writing my rpython clojure interpreter. Basically I > would love to have macros, protocols, symbol resolution, and a ton of > other little features for helping with run-time code generation and > the like. > So this is when I started thinking about generating python code from > Clojure source. That is, in Python we have a AST, and in Clojure we > can parse LISP lists and convert them to AST constructs. This is when > I started to realize where Python falls short of the flexability of > Clojure in a few key areas: > 1) multiline closures (lambdas with more than one statement)2) IfExpr > with multiline bodies. > I tried working with the ast module in Python 2.7...and it's horribly > restricted. I found a old lib from PEAK that does what I want > (BytecodeAssembler) but it's fairly old, and no longer supported. So > this is when I thought of the ast in /pypy/interpreter/astcompiler/ > Does the pypy interpreter support non-standard constructs like IfExpr > with a multiline body? And if not, could it be adapted to support > this? No and naturally yes. > I'd rather not drop all the way to emitting Python bytecode... Uh, so what are you doing with the Python AST then? -- Regards, Benjamin From fijall at gmail.com Tue Dec 13 16:03:00 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Dec 2011 17:03:00 +0200 Subject: [pypy-dev] Non standard ASTs In-Reply-To: References: Message-ID: On Tue, Dec 13, 2011 at 4:41 PM, Timothy Baldridge wrote: > For some time now, I've been working on my Clojure/pypy > implementation. However, I'm starting to wish I had a few Clojure > facilities while writing my rpython clojure interpreter. Basically I > would love to have macros, protocols, symbol resolution, and a ton of > other little features for helping with run-time code generation and > the like. > So this is when I started thinking about generating python code from > Clojure source. That is, in Python we have a AST, and in Clojure we > can parse LISP lists and convert them to AST constructs. This is when > I started to realize where Python falls short of the flexability of > Clojure in a few key areas: > 1) multiline closures (lambdas with more than one statement)2) IfExpr > with multiline bodies. > I tried working with the ast module in Python 2.7...and it's horribly > restricted. I found a old lib from PEAK that does what I want > (BytecodeAssembler) but it's fairly old, and no longer supported. So > this is when I thought of the ast in /pypy/interpreter/astcompiler/ > Does the pypy interpreter support non-standard constructs like IfExpr > with a multiline body? And if not, could it be adapted to support > this? > I'd rather not drop all the way to emitting Python bytecode...but I > may have to. My end goal is to have my target load some rClojure (as I > call it) code. And then have pypy translate this rClojure code to a > native interpreter. Yes, we're talking about adding a new front-end to > pypy. > Timothy > -- > ?One of the main causes of the fall of the Roman Empire was > that?lacking zero?they had no way to indicate successful termination > of their C programs.? 
> (Robert Firth) > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev RPython works based on the Python bytecode, so you're free to emit Python bytecode from whatever suits you. However, you're very much on your own, since this is never going to be incorporated in PyPy. I would strongly discourage you from dealing with the AST - but that might be just a personal dislike of AST manipulation as a means of mapping concepts from different languages. Cheers, fijal From arigo at tunes.org Tue Dec 13 16:06:41 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 13 Dec 2011 16:06:41 +0100 Subject: [pypy-dev] Non standard ASTs In-Reply-To: References: Message-ID: Hi, On Tue, Dec 13, 2011 at 15:41, Timothy Baldridge wrote: > 1) multiline closures (lambdas with more than one statement) > 2) IfExpr with multiline bodies. It just shows that you're trying to stuff too many things into Python's limited expressions. You need instead to produce complete statements, with extra temporary variables. For example: turn code like "if x then a; b; else c" (random syntax example) into:

if cond:
    a
    result = b
else:
    result = c

and then use 'result'. Yes, indeed, Python is not the best language for generating source or AST directly for, but it can be done. A bientôt, Armin. From tbaldridge at gmail.com Tue Dec 13 16:29:17 2011 From: tbaldridge at gmail.com (Timothy Baldridge) Date: Tue, 13 Dec 2011 09:29:17 -0600 Subject: [pypy-dev] Non standard ASTs In-Reply-To: References: Message-ID: One last question then. What bytecodes in Python are strictly unsupported in RPython? I assume YIELD is the major one... are there any others? Timothy -- "One of the main causes of the fall of the Roman Empire was that -- lacking zero -- they had no way to indicate successful termination of their C programs." (Robert Firth) From fijall at gmail.com Tue Dec 13 17:14:38 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Dec 2011 18:14:38 +0200 Subject: [pypy-dev] Free BSD pypy port Message-ID: Hi David, CC pypy-dev Can you explain the choice of options here: http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/Makefile?rev=1.1;content-type=text%2Fplain;only_with_tag=HEAD Especially, the objspace, gc and gcrootfinder choices. Cheers, fijal From fijall at gmail.com Tue Dec 13 17:16:26 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Dec 2011 18:16:26 +0200 Subject: [pypy-dev] Free BSD pypy port In-Reply-To: References: Message-ID: On Tue, Dec 13, 2011 at 6:14 PM, Maciej Fijalkowski wrote: > Hi David, CC pypy-dev > > Can you explain the choice of options here: > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/Makefile?rev=1.1;content-type=text%2Fplain;only_with_tag=HEAD > > Especially, the objspace, gc and gcrootfinder choices.
> > Cheers, > fijal I guess I simply mistaken examples for real code, sorry Cheers, fijal From naylor.b.david at gmail.com Tue Dec 13 17:55:07 2011 From: naylor.b.david at gmail.com (David Naylor) Date: Tue, 13 Dec 2011 18:55:07 +0200 Subject: [pypy-dev] Free BSD pypy port In-Reply-To: References: Message-ID: <201112131855.11038.naylor.b.david@gmail.com> On Tuesday, 13 December 2011 18:16:26 Maciej Fijalkowski wrote: > On Tue, Dec 13, 2011 at 6:14 PM, Maciej Fijalkowski wrote: > > Hi David, CC pypy-dev > > > > Can you explain the choice of options here: > > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/Makefile?rev=1.1;co > > ntent-type=text%2Fplain;only_with_tag=HEAD > > > > Especially, the objspace, gc and gcrootfinder choices. > > > > Cheers, > > fijal Hi > I guess I simply mistaken examples for real code, sorry For reference, the predefined options are at: http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/files/bsd.pypy.inst.mk?rev=1.1 Currently there are the default, sandbox and CLI predefined options. The CLI currently is not supported (WIP). Regards -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 196 bytes Desc: This is a digitally signed message part. URL: From fijall at gmail.com Tue Dec 13 18:03:09 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Dec 2011 19:03:09 +0200 Subject: [pypy-dev] Free BSD pypy port In-Reply-To: <201112131855.11038.naylor.b.david@gmail.com> References: <201112131855.11038.naylor.b.david@gmail.com> Message-ID: On Tue, Dec 13, 2011 at 6:55 PM, David Naylor wrote: > On Tuesday, 13 December 2011 18:16:26 Maciej Fijalkowski wrote: >> On Tue, Dec 13, 2011 at 6:14 PM, Maciej Fijalkowski > wrote: >> > Hi David, CC pypy-dev >> > >> > Can you explain the choice of options here: >> > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/Makefile?rev=1.1;co >> > ntent-type=text%2Fplain;only_with_tag=HEAD >> > >> > Especially, the objspace, gc and gcrootfinder choices. >> > >> > Cheers, >> > fijal > > Hi > >> I guess I simply mistaken examples for real code, sorry > > For reference, the predefined options are at: > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/files/bsd.pypy.inst.mk?rev=1.1 > > Currently there are the default, sandbox and CLI predefined options. ?The CLI > currently is not supported (WIP). > > Regards Cool, that all sounds good, --thread is on by default you don't have to specify it (it also does not hurt). From naylor.b.david at gmail.com Tue Dec 13 18:24:50 2011 From: naylor.b.david at gmail.com (David Naylor) Date: Tue, 13 Dec 2011 19:24:50 +0200 Subject: [pypy-dev] Free BSD pypy port In-Reply-To: References: <201112131855.11038.naylor.b.david@gmail.com> Message-ID: <201112131924.53737.naylor.b.david@gmail.com> On Tuesday, 13 December 2011 19:03:09 Maciej Fijalkowski wrote: > On Tue, Dec 13, 2011 at 6:55 PM, David Naylor wrote: > > On Tuesday, 13 December 2011 18:16:26 Maciej Fijalkowski wrote: > >> On Tue, Dec 13, 2011 at 6:14 PM, Maciej Fijalkowski > > > > wrote: > >> > Hi David, CC pypy-dev > >> > > >> > Can you explain the choice of options here: > >> > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/Makefile?rev=1.1 > >> > ;co ntent-type=text%2Fplain;only_with_tag=HEAD > >> > > >> > Especially, the objspace, gc and gcrootfinder choices. 
> >> > > >> > Cheers, > >> > fijal > > > > Hi > > > >> I guess I simply mistaken examples for real code, sorry > > > > For reference, the predefined options are at: > > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/files/bsd.pypy.inst > > .mk?rev=1.1 > > > > Currently there are the default, sandbox and CLI predefined options. The > > CLI currently is not supported (WIP). > > > > Regards > > Cool, that all sounds good, --thread is on by default you don't have > to specify it (it also does not hurt). Thanks, I'll remove --thread. I prefer to keep the default implicit (avoids having to track them). -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 196 bytes Desc: This is a digitally signed message part. URL: From fijall at gmail.com Tue Dec 13 18:28:07 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Dec 2011 19:28:07 +0200 Subject: [pypy-dev] Free BSD pypy port In-Reply-To: <201112131924.53737.naylor.b.david@gmail.com> References: <201112131855.11038.naylor.b.david@gmail.com> <201112131924.53737.naylor.b.david@gmail.com> Message-ID: On Tue, Dec 13, 2011 at 7:24 PM, David Naylor wrote: > On Tuesday, 13 December 2011 19:03:09 Maciej Fijalkowski wrote: >> On Tue, Dec 13, 2011 at 6:55 PM, David Naylor > wrote: >> > On Tuesday, 13 December 2011 18:16:26 Maciej Fijalkowski wrote: >> >> On Tue, Dec 13, 2011 at 6:14 PM, Maciej Fijalkowski >> > >> > wrote: >> >> > Hi David, CC pypy-dev >> >> > >> >> > Can you explain the choice of options here: >> >> > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/Makefile?rev=1.1 >> >> > ;co ntent-type=text%2Fplain;only_with_tag=HEAD >> >> > >> >> > Especially, the objspace, gc and gcrootfinder choices. >> >> > >> >> > Cheers, >> >> > fijal >> > >> > Hi >> > >> >> I guess I simply mistaken examples for real code, sorry >> > >> > For reference, the predefined options are at: >> > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/files/bsd.pypy.inst >> > .mk?rev=1.1 >> > >> > Currently there are the default, sandbox and CLI predefined options. ?The >> > CLI currently is not supported (WIP). >> > >> > Regards >> >> Cool, that all sounds good, --thread is on by default you don't have >> to specify it (it also does not hurt). > > Thanks, I'll remove --thread. ?I prefer to keep the default implicit (avoids > having to track them). You need --gcrootfinder=shadowstack for clang-based build, however the resulting executable *is* slower, so it's not advised to use it with gcc From naylor.b.david at gmail.com Tue Dec 13 18:50:10 2011 From: naylor.b.david at gmail.com (David Naylor) Date: Tue, 13 Dec 2011 19:50:10 +0200 Subject: [pypy-dev] pypy-1.7 and CLI Message-ID: <201112131950.13849.naylor.b.david@gmail.com> Hi, Does pypy-1.7 support translating to the CLI backend? On FreeBSD I get the following error: # mono --version Mono JIT compiler version 2.10.6 (tarball Thu Nov 24 17:06:01 UTC 2011) Copyright (C) 2002-2011 Novell, Inc, Xamarin, Inc and Contributors. www.mono-project.com TLS: normal SIGSEGV: normal Notification: kqueue Architecture: amd64 Disabled: none Misc: softdebug LLVM: supported, not enabled. 
GC: Included Boehm (with typed GC and Parallel Mark) # /usr/local/bin/pypy translate.py --backend=cli -Ojit targetpypystandalone.py [translation:ERROR] File "/pypy/translator/cli/query.py", line 25, in get_cli_class [translation:ERROR] return desc.get_cliclass() [translation:ERROR] File "/pypy/translator/cli/query.py", line 169, in get_cliclass [translation:ERROR] BASETYPE = get_ootype(self.BaseType) [translation:ERROR] File "/pypy/translator/cli/query.py", line 91, in get_ootype [translation:ERROR] cliclass = get_cli_class(name) [translation:ERROR] File "/pypy/translator/cli/query.py", line 25, in get_cli_class [translation:ERROR] return desc.get_cliclass() [translation:ERROR] File "/pypy/translator/cli/query.py", line 178, in get_cliclass [translation:ERROR] _static_meth, ootype.StaticMethod) [translation:ERROR] File "/pypy/translator/cli/query.py", line 197, in group_methods [translation:ERROR] res[name] = overload(*meths, **attrs) [translation:ERROR] File "/pypy/translator/cli/dotnet.py", line 242, in __init__ [translation:ERROR] self._resolver = resolver(overloadings) [translation:ERROR] File "/pypy/rpython/ootypesystem/ootype.py", line 1290, in __init__ [translation:ERROR] self._check_overloadings() [translation:ERROR] File "/pypy/rpython/ootypesystem/ootype.py", line 1297, in _check_overloadings [translation:ERROR] raise TypeError, 'Bad overloading' [translation:ERROR] TypeError: Bad overloading Regards, David -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 196 bytes Desc: This is a digitally signed message part. URL: From fijall at gmail.com Tue Dec 13 18:52:32 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Dec 2011 19:52:32 +0200 Subject: [pypy-dev] pypy-1.7 and CLI In-Reply-To: <201112131950.13849.naylor.b.david@gmail.com> References: <201112131950.13849.naylor.b.david@gmail.com> Message-ID: On Tue, Dec 13, 2011 at 7:50 PM, David Naylor wrote: > Hi, > > Does pypy-1.7 support translating to the CLI backend? The short answer is no :( From arigo at tunes.org Tue Dec 13 18:57:27 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 13 Dec 2011 18:57:27 +0100 Subject: [pypy-dev] Non standard ASTs In-Reply-To: References: Message-ID: Hi, On Tue, Dec 13, 2011 at 16:29, Timothy Baldridge wrote: > One last question then. What bytecodes in Python are strictly > unsupported in rPython. I assume YIELD is the major one...are there > any others? You'll have to derive from http://doc.pypy.org/en/latest/coding-guide.html#id1 which bytecodes are always, sometimes, or never supported. We are not thinking at this level usually. A function can even contain EXEC_STMT, DELETE_GLOBAL or BUILD_CLASS, as long as the corresponding parts are not reachable from RPython. A bient?t, Armin. From arigo at tunes.org Tue Dec 13 18:58:59 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 13 Dec 2011 18:58:59 +0100 Subject: [pypy-dev] Free BSD pypy port In-Reply-To: References: <201112131855.11038.naylor.b.david@gmail.com> <201112131924.53737.naylor.b.david@gmail.com> Message-ID: Hi Fijal, On Tue, Dec 13, 2011 at 18:28, Maciej Fijalkowski wrote: > You need --gcrootfinder=shadowstack for clang-based build, however the > resulting executable *is* slower, so it's not advised to use it with > gcc Right now, in light of the many issues reported, --gcrootfinder=shadowstack is the default on any system different than Linux. A bient?t, Armin. 
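For reference, picking the root finder explicitly at translation time is just a command-line option, along the lines of:

pypy translate.py -Ojit --gcrootfinder=shadowstack targetpypystandalone.py

(the same option spelled with asmgcc appears in David's log further down in this thread).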
From bokr at oz.net Tue Dec 13 19:09:26 2011 From: bokr at oz.net (Bengt Richter) Date: Tue, 13 Dec 2011 19:09:26 +0100 Subject: [pypy-dev] No pypy Tkinter demo hello world here ... Message-ID: Thought JFTHOI I would try pypy on the simplest demo:

[18:05 ~/src/Python-2.7.2/Demo/tkinter/matt]$ cat -n 00-HELLO-WORLD.py
     1  from Tkinter import *
     2
     3  # note that there is no explicit call to start Tk.
     4  # Tkinter is smart enough to start the system if it's not already going.
     5
     6  class Test(Frame):
     7      def printit(self):
     8          print "hi"
     9
    10      def createWidgets(self):
    11          self.QUIT = Button(self, text='QUIT', foreground='red',
    12                             command=self.quit)
    13
    14          self.QUIT.pack(side=LEFT, fill=BOTH)
    15
    16          # a hello button
    17          self.hi_there = Button(self, text='Hello',
    18                                 command=self.printit)
    19          self.hi_there.pack(side=LEFT)
    20
    21      def __init__(self, master=None):
    22          Frame.__init__(self, master)
    23          Pack.config(self)
    24          self.createWidgets()
    25
    26  test = Test()
    27  test.mainloop()

[18:05 ~/src/Python-2.7.2/Demo/tkinter/matt]$ pypy 00-HELLO-WORLD.py
pypy: /usr/lib/libcrypto.so.0.9.8: no version information available (required by pypy)
pypy: /usr/lib/libssl.so.0.9.8: no version information available (required by pypy)
Traceback (most recent call last):
  File "app_main.py", line 53, in run_toplevel
  File "00-HELLO-WORLD.py", line 1, in <module>
    from Tkinter import *
  File "/home/bokr/pypy/pypy-c-jit-43780-b590cf6de419-linux/lib-python/2.7/lib-tk/Tkinter.py", line 39, in <module>
    import _tkinter # If this fails your Python may not be configured for Tk
ImportError: No module named _tkinter
[18:05 ~/src/Python-2.7.2/Demo/tkinter/matt]$

I guess my "Python may not be configured for Tk" ;-) Is there an easy fix for this? I built my pypy from source, using the simplest config/make/install as specified in the tarball -- IIRC:

[18:05 ~/src/Python-2.7.2/Demo/tkinter/matt]$ pypy --version
pypy: /usr/lib/libcrypto.so.0.9.8: no version information available (required by pypy)
pypy: /usr/lib/libssl.so.0.9.8: no version information available (required by pypy)
Python 2.7.1 (b590cf6de419, Apr 30 2011, 02:00:38)
[PyPy 1.5.0-alpha0 with GCC 4.4.3]

Also a fix for the crypto / ssl messages, one way or another? Did I just miss some config options? Sorry to be so lazy and just ask ;-/ Decided to send it anyway ... Regards, Bengt Richter From amauryfa at gmail.com Tue Dec 13 19:13:40 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 13 Dec 2011 19:13:40 +0100 Subject: [pypy-dev] No pypy Tkinter demo hello world here ... In-Reply-To: References: Message-ID: 2011/12/13 Bengt Richter > Also a fix for the crypto / ssl messages, one way or another? A quick Google search found this: http://stackoverflow.com/questions/137773/what-does-the-no-version-information-available-error-from-linux-dynamic-linker
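(That warning comes from the dynamic linker: the binary references versioned symbols that the installed libssl/libcrypto 0.9.8 provides no version information for. Something like

ldd `which pypy` | grep -i -e ssl -e crypto

shows which libraries actually get picked up at run time.)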
URL: From naylor.b.david at gmail.com Tue Dec 13 19:31:55 2011 From: naylor.b.david at gmail.com (David Naylor) Date: Tue, 13 Dec 2011 20:31:55 +0200 Subject: [pypy-dev] Free BSD pypy port In-Reply-To: References: <201112131924.53737.naylor.b.david@gmail.com> Message-ID: <201112132031.59300.naylor.b.david@gmail.com> On Tuesday, 13 December 2011 19:28:07 Maciej Fijalkowski wrote: > On Tue, Dec 13, 2011 at 7:24 PM, David Naylor wrote: > > On Tuesday, 13 December 2011 19:03:09 Maciej Fijalkowski wrote: > >> On Tue, Dec 13, 2011 at 6:55 PM, David Naylor > > > > wrote: > >> > On Tuesday, 13 December 2011 18:16:26 Maciej Fijalkowski wrote: > >> >> On Tue, Dec 13, 2011 at 6:14 PM, Maciej Fijalkowski > >> >> > >> > > >> > wrote: > >> >> > Hi David, CC pypy-dev > >> >> > > >> >> > Can you explain the choice of options here: > >> >> > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/Makefile?rev= > >> >> > 1.1 ;co ntent-type=text%2Fplain;only_with_tag=HEAD > >> >> > > >> >> > Especially, the objspace, gc and gcrootfinder choices. > >> >> > > >> >> > Cheers, > >> >> > fijal > >> > > >> > Hi > >> > > >> >> I guess I simply mistaken examples for real code, sorry > >> > > >> > For reference, the predefined options are at: > >> > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/files/bsd.pypy.i > >> > nst .mk?rev=1.1 > >> > > >> > Currently there are the default, sandbox and CLI predefined options. > >> > The CLI currently is not supported (WIP). > >> > > >> > Regards > >> > >> Cool, that all sounds good, --thread is on by default you don't have > >> to specify it (it also does not hurt). > > > > Thanks, I'll remove --thread. I prefer to keep the default implicit > > (avoids having to track them). > > You need --gcrootfinder=shadowstack for clang-based build, however the > resulting executable *is* slower, so it's not advised to use it with > gcc It appears the shadowstack is not the default and that asmgcc does not work with gcc under FreeBSD: # gcc --version gcc (GCC) 4.2.1 20070831 patched [FreeBSD] Copyright (C) 2007 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # /usr/local/bin/pypy translate.py --source --gcrootfinder=asmgcc -Ojit targetpypystandalone.py python /pypy/translator/c/gcc/trackgcroot.py -t module__sre_interp_sre.s > module__sre_interp_sre.gctmp Traceback (most recent call last): File "/pypy/translator/c/gcc/trackgcroot.py", line 2008, in tracker.process(f, g, filename=fn) File "/pypy/translator/c/gcc/trackgcroot.py", line 1901, in process tracker = parser.process_function(lines, filename) File "/pypy/translator/c/gcc/trackgcroot.py", line 1418, in process_function table = tracker.computegcmaptable(self.verbose) File "/pypy/translator/c/gcc/trackgcroot.py", line 60, in computegcmaptable self.trackgcroots() File "/pypy/translator/c/gcc/trackgcroot.py", line 335, in trackgcroots self.walk_instructions_backwards(walker, insn, loc) File "/pypy/translator/c/gcc/trackgcroot.py", line 354, in walk_instructions_backwards for prevstate in walker(insn, state): File "/pypy/translator/c/gcc/trackgcroot.py", line 327, in walker source = insn.source_of(loc, tag) File "/pypy/translator/c/gcc/instruction.py", line 130, in source_of (localvar,)) AssertionError: must come from an argument to the function, got <-72;esp> gmake: *** [module__sre_interp_sre.gcmap] Error 1 I remember testing other versions of gcc with the same result. 
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 196 bytes Desc: This is a digitally signed message part. URL: From fijall at gmail.com Tue Dec 13 19:32:08 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 13 Dec 2011 20:32:08 +0200 Subject: [pypy-dev] Free BSD pypy port In-Reply-To: <201112132031.59300.naylor.b.david@gmail.com> References: <201112131924.53737.naylor.b.david@gmail.com> <201112132031.59300.naylor.b.david@gmail.com> Message-ID: On Tue, Dec 13, 2011 at 8:31 PM, David Naylor wrote: > On Tuesday, 13 December 2011 19:28:07 Maciej Fijalkowski wrote: >> On Tue, Dec 13, 2011 at 7:24 PM, David Naylor wrote: >> > On Tuesday, 13 December 2011 19:03:09 Maciej Fijalkowski wrote: >> >> On Tue, Dec 13, 2011 at 6:55 PM, David Naylor >> > >> > wrote: >> >> > On Tuesday, 13 December 2011 18:16:26 Maciej Fijalkowski wrote: >> >> >> On Tue, Dec 13, 2011 at 6:14 PM, Maciej Fijalkowski >> >> >> >> >> > >> >> > wrote: >> >> >> > Hi David, CC pypy-dev >> >> >> > >> >> >> > Can you explain the choice of options here: >> >> >> > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/Makefile?rev= >> >> >> > 1.1 ;co ntent-type=text%2Fplain;only_with_tag=HEAD >> >> >> > >> >> >> > Especially, the objspace, gc and gcrootfinder choices. >> >> >> > >> >> >> > Cheers, >> >> >> > fijal >> >> > >> >> > Hi >> >> > >> >> >> I guess I simply mistaken examples for real code, sorry >> >> > >> >> > For reference, the predefined options are at: >> >> > http://www.freebsd.org/cgi/cvsweb.cgi/ports/lang/pypy/files/bsd.pypy.i >> >> > nst .mk?rev=1.1 >> >> > >> >> > Currently there are the default, sandbox and CLI predefined options. >> >> > ?The CLI currently is not supported (WIP). >> >> > >> >> > Regards >> >> >> >> Cool, that all sounds good, --thread is on by default you don't have >> >> to specify it (it also does not hurt). >> > >> > Thanks, I'll remove --thread. ?I prefer to keep the default implicit >> > (avoids having to track them). >> >> You need --gcrootfinder=shadowstack for clang-based build, however the >> resulting executable *is* slower, so it's not advised to use it with >> gcc > > It appears the shadowstack is not the default and that asmgcc does not work with gcc under FreeBSD: > # gcc --version > gcc (GCC) 4.2.1 20070831 patched [FreeBSD] > Copyright (C) 2007 Free Software Foundation, Inc. > This is free software; see the source for copying conditions. ?There is NO > warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. > # /usr/local/bin/pypy translate.py --source --gcrootfinder=asmgcc -Ojit ?targetpypystandalone.py > > python /pypy/translator/c/gcc/trackgcroot.py -t module__sre_interp_sre.s > module__sre_interp_sre.gctmp > Traceback (most recent call last): > ?File "/pypy/translator/c/gcc/trackgcroot.py", line 2008, in > ? ?tracker.process(f, g, filename=fn) > ?File "/pypy/translator/c/gcc/trackgcroot.py", line 1901, in process > ? ?tracker = parser.process_function(lines, filename) > ?File "/pypy/translator/c/gcc/trackgcroot.py", line 1418, in process_function > ? ?table = tracker.computegcmaptable(self.verbose) > ?File "/pypy/translator/c/gcc/trackgcroot.py", line 60, in computegcmaptable > ? ?self.trackgcroots() > ?File "/pypy/translator/c/gcc/trackgcroot.py", line 335, in trackgcroots > ? ?self.walk_instructions_backwards(walker, insn, loc) > ?File "/pypy/translator/c/gcc/trackgcroot.py", line 354, in walk_instructions_backwards > ? 
?for prevstate in walker(insn, state): > ?File "/pypy/translator/c/gcc/trackgcroot.py", line 327, in walker > ? ?source = insn.source_of(loc, tag) > ?File "/pypy/translator/c/gcc/instruction.py", line 130, in source_of > ? ?(localvar,)) > AssertionError: must come from an argument to the function, got <-72;esp> > gmake: *** [module__sre_interp_sre.gcmap] Error 1 > > I remember testing other versions of gcc with the same result. than shadowstack *should* be default. From amauryfa at gmail.com Tue Dec 13 23:42:51 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 13 Dec 2011 23:42:51 +0100 Subject: [pypy-dev] pypy-1.7 and CLI In-Reply-To: <201112131950.13849.naylor.b.david@gmail.com> References: <201112131950.13849.naylor.b.david@gmail.com> Message-ID: 2011/12/13 David Naylor > Does pypy-1.7 support translating to the CLI backend? > [...] > [translation:ERROR] File "/pypy/rpython/ootypesystem/ootype.py", line > 1297, in _check_overloadings > [translation:ERROR] raise TypeError, 'Bad overloading' > [translation:ERROR] TypeError: Bad overloading > This particular error occurs on 64bit architecture (where long == longlong) It seems that the CLI backend is not yet ready for 64bit. This is probably not too difficult to fix, patches are welcome! -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Wed Dec 14 08:54:54 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 14 Dec 2011 08:54:54 +0100 Subject: [pypy-dev] pypy-1.7 and CLI In-Reply-To: References: <201112131950.13849.naylor.b.david@gmail.com> Message-ID: <4EE8564E.4050800@gmail.com> On 12/13/2011 11:42 PM, Amaury Forgeot d'Arc wrote: > This particular error occurs on 64bit architecture (where long == longlong) > It seems that the CLI backend is not yet ready for 64bit. > This is probably not too difficult to fix, patches are welcome! actually, it is not so easy. The problem is "native" CLI integer is 32 bit, so we want to translate lltype.Signed to int32, even on 64 bit. But when translating on 64 bit, the translation toolchain thinks that lltype.Signed is rffi.LONG, and things are confused. Hopefully, the work that Christian is doing for win64 will help the CLI and JVM backends too. ciao, Anto From anto.cuni at gmail.com Wed Dec 14 08:59:09 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 14 Dec 2011 08:59:09 +0100 Subject: [pypy-dev] No pypy Tkinter demo hello world here ... In-Reply-To: References: Message-ID: <4EE8574D.3010505@gmail.com> Hello Bengts On 12/13/2011 07:09 PM, Bengt Richter wrote: > Thought JFTHOI I would try pypy on the simplest demo: > > [18:05 ~/src/Python-2.7.2/Demo/tkinter/matt]$ cat -n 00-HELLO-WORLD.py [cut] > import _tkinter # If this fails your Python may not be configured for Tk > ImportError: No module named _tkinter > [18:05 ~/src/Python-2.7.2/Demo/tkinter/matt]$ > > I guess my "Python may not be configured for Tk" ;-) you need to install tkinter-pypy: http://morepypy.blogspot.com/2011/04/using-tkinter-and-idle-with-pypy.html https://bitbucket.org/pypy/tkinter note that setup.py hardcode the Tk version number to 8.4. You might need to manually modify it to 8.5 depending on which version is installed on your system. ciao, Anto From arigo at tunes.org Wed Dec 14 13:13:03 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 14 Dec 2011 13:13:03 +0100 Subject: [pypy-dev] ctypes tests Message-ID: Hi all, I think we could decide by now what to do with the remaining 34 @xfail tests of ctypes. 
It seems nobody noticed that we don't really implement the full CPython-2.7-compatible ctypes. (As far as I'm concerned, it adds "features" mostly in the form of more and more obscure special cases.) So what do you say about being fine with the current 2.5-compatible situation and silencing the failure on buildbot? http://buildbot.pypy.org/summary/longrepr?testname=unmodified&builder=pypy-c-jit-linux-x86-32&build=1159&mod=lib-python.2.7.test.test_ctypes (Note that there are two failures in the link above. We never noticed when a second failure appeared in addition to the classical xfail one. If anything, it shows that the current situation is bogus.) A bient?t, Armin. From fijall at gmail.com Wed Dec 14 13:16:43 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 14 Dec 2011 14:16:43 +0200 Subject: [pypy-dev] ctypes tests In-Reply-To: References: Message-ID: On Wed, Dec 14, 2011 at 2:13 PM, Armin Rigo wrote: > Hi all, > > I think we could decide by now what to do with the remaining 34 @xfail > tests of ctypes. ?It seems nobody noticed that we don't really > implement the full CPython-2.7-compatible ctypes. ?(As far as I'm > concerned, it adds "features" mostly in the form of more and more > obscure special cases.) ?So what do you say about being fine with the > current 2.5-compatible situation and silencing the failure on > buildbot? > > ?http://buildbot.pypy.org/summary/longrepr?testname=unmodified&builder=pypy-c-jit-linux-x86-32&build=1159&mod=lib-python.2.7.test.test_ctypes > > (Note that there are two failures in the link above. ?We never noticed > when a second failure appeared in addition to the classical xfail one. > ?If anything, it shows that the current situation is bogus.) > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Hi There are multiple bug reports about missing features in ctypes. Can you look if they're relevant to failures or not? Cheers, fijal From arigo at tunes.org Wed Dec 14 13:17:23 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 14 Dec 2011 13:17:23 +0100 Subject: [pypy-dev] No pypy Tkinter demo hello world here ... In-Reply-To: References: Message-ID: Hi, On Tue, Dec 13, 2011 at 19:09, Bengt Richter wrote: > pypy: /usr/lib/libcrypto.so.0.9.8: no version information available > (required by pypy) > pypy: /usr/lib/libssl.so.0.9.8: no version information available (required > by pypy) This was fixed after the 1.7 release. Any binary linking openssl that you build on Debian Linux and use on another Linux system is going the print these warnings, that can be safely ignored. The nightly builds are made by linking with a hand-compiled version of openssl instead. Get them on http://buildbot.pypy.org/nightly/trunk/ (just don't get today's version, as it is subtly broken). A bient?t, Armin. From arigo at tunes.org Wed Dec 14 13:27:08 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 14 Dec 2011 13:27:08 +0100 Subject: [pypy-dev] ctypes tests In-Reply-To: References: Message-ID: Hi Fijal, On Wed, Dec 14, 2011 at 13:16, Maciej Fijalkowski wrote: > There are multiple bug reports about missing features in ctypes. Can > you look if they're relevant to failures or not? Good point. Some of them are. Still, wouldn't it be better to hide the failure? It doesn't prevent us from actually taking in bug reports and fixing them. 
It's mostly a point of internal workflow: this is a failure that will likely always remain, because I bet we'll never get around to fix all 34 obscure cases (and it hides regressions). A bientôt, Armin. From fijall at gmail.com Wed Dec 14 13:32:44 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 14 Dec 2011 14:32:44 +0200 Subject: [pypy-dev] ctypes tests In-Reply-To: References: Message-ID: On Wed, Dec 14, 2011 at 2:27 PM, Armin Rigo wrote: > Hi Fijal, > > On Wed, Dec 14, 2011 at 13:16, Maciej Fijalkowski wrote: >> There are multiple bug reports about missing features in ctypes. Can >> you look if they're relevant to failures or not? > > Good point. Some of them are. Still, wouldn't it be better to hide > the failure? It doesn't prevent us from actually taking in bug > reports and fixing them. It's mostly a point of internal workflow: > this is a failure that will likely always remain, because I bet we'll > never get around to fix all 34 obscure cases (and it hides regressions). > > > A bientôt, > > Armin. I'm fine with hiding them, would be good to fix the bugs reported at least though. From anto.cuni at gmail.com Wed Dec 14 13:34:15 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 14 Dec 2011 13:34:15 +0100 Subject: [pypy-dev] ctypes tests In-Reply-To: References: Message-ID: <4EE897C7.1050905@gmail.com> On 12/14/2011 01:27 PM, Armin Rigo wrote: > Hi Fijal, > > On Wed, Dec 14, 2011 at 13:16, Maciej Fijalkowski wrote: >> There are multiple bug reports about missing features in ctypes. Can >> you look if they're relevant to failures or not? > > Good point. Some of them are. Still, wouldn't it be better to hide > the failure? It doesn't prevent us from actually taking in bug > reports and fixing them. It's mostly a point of internal workflow: > this is a failure that will likely always remain, because I bet we'll > never get around to fix all 34 obscure cases (and it hides regressions). I think I am the one to be biased for the current situation :-). I agree with Armin that the current situation is "good enough", and that we should fix/modify ctypes only if someone actually reports an issue. +1 for killing/skipping the xfailing tests. ciao, Anto From arigo at tunes.org Wed Dec 14 14:51:44 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 14 Dec 2011 14:51:44 +0100 Subject: [pypy-dev] ctypes tests In-Reply-To: <4EE897C7.1050905@gmail.com> References: <4EE897C7.1050905@gmail.com> Message-ID: Skipped. Armin From arigo at tunes.org Wed Dec 14 14:52:57 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 14 Dec 2011 14:52:57 +0100 Subject: [pypy-dev] Sprint in Leysin? Message-ID: Hi all, Who would be interested in the next sprint being in Leysin? Some time around the 2nd half of January? A bientôt, Armin. From arigo at tunes.org Wed Dec 14 14:55:11 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 14 Dec 2011 14:55:11 +0100 Subject: [pypy-dev] Free BSD pypy port In-Reply-To: References: <201112131855.11038.naylor.b.david@gmail.com> <201112131924.53737.naylor.b.david@gmail.com> Message-ID: Hi, On Tue, Dec 13, 2011 at 19:32, Maciej Fijalkowski wrote: > than shadowstack *should* be default.

if sys.platform.startswith("linux"):
    DEFL_ROOTFINDER_WITHJIT = "asmgcc"
else:
    DEFL_ROOTFINDER_WITHJIT = "shadowstack"

Unless I'm missing something, shadowstack *is* the default on any platform != linux. A bientôt, Armin.
From holger at merlinux.eu Wed Dec 14 16:54:37 2011 From: holger at merlinux.eu (holger krekel) Date: Wed, 14 Dec 2011 15:54:37 +0000 Subject: [pypy-dev] ctypes tests In-Reply-To: References: <4EE897C7.1050905@gmail.com> Message-ID: <20111214155437.GB27920@merlinux.eu> On Wed, Dec 14, 2011 at 14:51 +0100, Armin Rigo wrote: > Skipped. Just in case you don't know: you can do @xfail(reason="obscure corner case not worth fixing ATM", run=False) to get the same effect as skipping while more clearly marking and reporting the tests as expected to fail. This would be more consistent with the practise to only use skips for tests where a dependency/wrong platform etc. prevents the running of a test. holger From arigo at tunes.org Wed Dec 14 17:49:39 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 14 Dec 2011 17:49:39 +0100 Subject: [pypy-dev] ctypes tests In-Reply-To: <20111214155437.GB27920@merlinux.eu> References: <4EE897C7.1050905@gmail.com> <20111214155437.GB27920@merlinux.eu> Message-ID: Hi Holger, On Wed, Dec 14, 2011 at 16:54, holger krekel wrote: > Just in case you don't know: you can do > > ? ?@xfail(reason="obscure corner case not worth fixing ATM", run=False) We don't have the py lib's xfail() here; we are not using the py lib in tests of the standard library of python. Armin From holger at merlinux.eu Wed Dec 14 18:05:52 2011 From: holger at merlinux.eu (holger krekel) Date: Wed, 14 Dec 2011 17:05:52 +0000 Subject: [pypy-dev] ctypes tests In-Reply-To: References: <4EE897C7.1050905@gmail.com> <20111214155437.GB27920@merlinux.eu> Message-ID: <20111214170552.GC27920@merlinux.eu> On Wed, Dec 14, 2011 at 17:49 +0100, Armin Rigo wrote: > Hi Holger, > > On Wed, Dec 14, 2011 at 16:54, holger krekel wrote: > > Just in case you don't know: you can do > > > > ? ?@xfail(reason="obscure corner case not worth fixing ATM", run=False) > > We don't have the py lib's xfail() here; we are not using the py lib > in tests of the standard library of python. Ah, right. Where is Michael when you need him ... holger From anto.cuni at gmail.com Thu Dec 15 10:35:36 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 15 Dec 2011 10:35:36 +0100 Subject: [pypy-dev] Sprint in Leysin? In-Reply-To: References: Message-ID: <4EE9BF68.5020206@gmail.com> Hi Armin, On 12/14/2011 02:52 PM, Armin Rigo wrote: > Hi all, > > Who would be interested in the next sprint being in Leysin? Some time > around the 2nd half of January? I am :-) The best period for me is sometime between the 9th and the 20th. ciao, Anto From fijall at gmail.com Thu Dec 15 10:59:48 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 15 Dec 2011 11:59:48 +0200 Subject: [pypy-dev] Sprint in Leysin? In-Reply-To: <4EE9BF68.5020206@gmail.com> References: <4EE9BF68.5020206@gmail.com> Message-ID: On Thu, Dec 15, 2011 at 11:35 AM, Antonio Cuni wrote: > Hi Armin, > > On 12/14/2011 02:52 PM, Armin Rigo wrote: >> Hi all, >> >> Who would be interested in the next sprint being in Leysin? ?Some time >> around the 2nd half of January? > > I am :-) > > The best period for me is sometime between the 9th and the 20th. > > ciao, > Anto > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Hello. While I'm not particularly interested in Leysin now (tad too far :), maybe we should brainstorm an idea of doing a sprint on the other side of the globe (read Cape Town) some time in the future? 
Cheers, fijal From ned at nedbatchelder.com Fri Dec 16 05:15:48 2011 From: ned at nedbatchelder.com (Ned Batchelder) Date: Thu, 15 Dec 2011 23:15:48 -0500 Subject: [pypy-dev] New sandbox failure: pypy__float2longlong Message-ID: <4EEAC5F4.8070100@nedbatchelder.com> I updated my sandbox work to yesterday's tip, and now I cannot import a number of stdlib modules that used to work fine: ~/pypy> ./pypy/translator/sandbox/pypy_interact.py -q --heapsize=64m pypy-c Warning: cannot find your CPU L2 cache size in /proc/cpuinfo 'import site' failed Python 2.7.1 (7d73e99929bb, Dec 14 2011, 02:24:28) [PyPy 1.7.1-dev0 with GCC 4.6.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``Du wirst eben genau das erreichen, woran keiner glaubt'' >>>> import struct Not Implemented: sandboxing for external function 'pypy__float2longlong' Traceback (most recent call last): File "", line 1, in RuntimeError >>>> The modules I've found that cannot be imported are copy, random, and struct. Of course many other modules use those, so lots of stuff has stopped working (json being the important one in my case). Can someone give me clues? (PS: the -q switch to pypy_interact is in my nedbat-sandbox-2 branch, it just suppresses all the logging.) --Ned. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Fri Dec 16 06:18:00 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 16 Dec 2011 06:18:00 +0100 Subject: [pypy-dev] New sandbox failure: pypy__float2longlong In-Reply-To: <4EEAC5F4.8070100@nedbatchelder.com> References: <4EEAC5F4.8070100@nedbatchelder.com> Message-ID: Hi Ned, On Fri, Dec 16, 2011 at 05:15, Ned Batchelder wrote: > Not Implemented: sandboxing for external function 'pypy__float2longlong' Ah, we made new uses of float2longlong(), notably in the "is" operator on floats. Fixed in d9b372cf25b0. A bient?t, Armin. From romain.py at gmail.com Fri Dec 16 23:18:20 2011 From: romain.py at gmail.com (Romain Guillebert) Date: Fri, 16 Dec 2011 23:18:20 +0100 Subject: [pypy-dev] Sprint in Leysin? In-Reply-To: References: Message-ID: Hi Armin I would be interested too and have nothing planed in January so any time is fine (unless something unpredictable happens :). Romain On Wed, Dec 14, 2011 at 2:52 PM, Armin Rigo wrote: > Hi all, > > Who would be interested in the next sprint being in Leysin? Some time > around the 2nd half of January? > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat Dec 17 21:35:09 2011 From: arigo at tunes.org (Armin Rigo) Date: Sat, 17 Dec 2011 21:35:09 +0100 Subject: [pypy-dev] Sprint in Leysin? In-Reply-To: References: Message-ID: Hi all, Thanks for the answers! Unless someone has other preferences, I will try to book the week 14-20th. Armin From lac at openend.se Sun Dec 18 14:06:50 2011 From: lac at openend.se (Laura Creighton) Date: Sun, 18 Dec 2011 14:06:50 +0100 Subject: [pypy-dev] OSCON deadline for proposals Jan 12. Message-ID: <201112181306.pBID6odi032576@theraft.openend.se> ------- Forwarded Message To: conferences at python.org From: Aahz DEADLINE Thursday January 12 OSCON (O'Reilly Open Source Convention), the premier Open Source gathering, will be held in Portland, OR July 16-20. 
We're looking for people to deliver tutorials and shorter presentations. http://www.oscon.com/oscon2012 http://www.oscon.com/oscon2012/public/cfp/197 Hope to see you there! - -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ ------- End of Forwarded Message From fijall at gmail.com Sun Dec 18 16:26:05 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 18 Dec 2011 17:26:05 +0200 Subject: [pypy-dev] OSCON deadline for proposals Jan 12. In-Reply-To: <201112181306.pBID6odi032576@theraft.openend.se> References: <201112181306.pBID6odi032576@theraft.openend.se> Message-ID: On Sun, Dec 18, 2011 at 3:06 PM, Laura Creighton wrote: > > ------- Forwarded Message > > To: conferences at python.org > From: Aahz > > DEADLINE Thursday January 12 > > OSCON (O'Reilly Open Source Convention), the premier Open Source > gathering, will be held in Portland, OR July 16-20. ?We're looking for > people to deliver tutorials and shorter presentations. > > http://www.oscon.com/oscon2012 > http://www.oscon.com/oscon2012/public/cfp/197 > > Hope to see you there! > - -- > Aahz (aahz at pythoncraft.com) ? ? ? ? ? <*> ? ? ? ? http://www.pythoncraft.com/ > ------- End of Forwarded Message > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Is PyPy really on topic for OSCON? Cheers, fijal From alex.gaynor at gmail.com Sun Dec 18 16:31:31 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sun, 18 Dec 2011 09:31:31 -0600 Subject: [pypy-dev] OSCON deadline for proposals Jan 12. In-Reply-To: References: <201112181306.pBID6odi032576@theraft.openend.se> Message-ID: On Sun, Dec 18, 2011 at 9:26 AM, Maciej Fijalkowski wrote: > On Sun, Dec 18, 2011 at 3:06 PM, Laura Creighton wrote: > > > > ------- Forwarded Message > > > > To: conferences at python.org > > From: Aahz > > > > DEADLINE Thursday January 12 > > > > OSCON (O'Reilly Open Source Convention), the premier Open Source > > gathering, will be held in Portland, OR July 16-20. We're looking for > > people to deliver tutorials and shorter presentations. > > > > http://www.oscon.com/oscon2012 > > http://www.oscon.com/oscon2012/public/cfp/197 > > > > Hope to see you there! > > - -- > > Aahz (aahz at pythoncraft.com) <*> > http://www.pythoncraft.com/ > > ------- End of Forwarded Message > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > http://mail.python.org/mailman/listinfo/pypy-dev > > Is PyPy really on topic for OSCON? > > Cheers, > fijal > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > Sure, just about anything open source is. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From a at bostani.us Mon Dec 19 05:27:08 2011 From: a at bostani.us (Arman Bostani) Date: Sun, 18 Dec 2011 20:27:08 -0800 Subject: [pypy-dev] very slow ctypes callbacks Message-ID: Hi all, I believe this problem has been brought to your attention before. But, perhaps it something that can be added to the bug list. I'm using a proprietary Windows library which unfortunately uses callbacks a lot. I've attached a simple test case. Under pypy 1.7, this runs about 8 times slower than CPython 2.7. 
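(The attached test sorts a small array with msvcrt's qsort and a Python comparison callback; on a non-Windows box an equivalent sketch would load the C library instead -- untested here, shown only to give the shape of the benchmark:)

from ctypes import CDLL, CFUNCTYPE, POINTER, c_int, sizeof
from ctypes.util import find_library
import timeit

libc = CDLL(find_library("c"))

@CFUNCTYPE(c_int, POINTER(c_int), POINTER(c_int))
def py_cmp_func(a, b):
    # called back from C once per comparison
    return a[0] - b[0]

ia = (c_int * 5)(5, 1, 7, 33, 99)

def f():
    libc.qsort(ia, len(ia), sizeof(c_int), py_cmp_func)

print timeit.repeat("f()", "from __main__ import f", number=100000)
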
Best, -arman -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part --------------

from ctypes import *
import timeit

def py_cmp_func(a, b):
    return a[0] - b[0]

CMPFUNC = CFUNCTYPE(c_int, POINTER(c_int), POINTER(c_int))
cmp_func = CMPFUNC(py_cmp_func)

int_size = sizeof(c_int)
LEN = 5
IntArray5 = c_int * LEN
ia = IntArray5(5, 1, 7, 33, 99)
qsort = cdll.msvcrt.qsort

def f():
    qsort(ia, LEN, int_size, cmp_func)

print timeit.repeat("f()", "from __main__ import f", number=100000)

From fijall at gmail.com Mon Dec 19 10:21:25 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 19 Dec 2011 11:21:25 +0200 Subject: [pypy-dev] very slow ctypes callbacks In-Reply-To: References: Message-ID: On Mon, Dec 19, 2011 at 6:27 AM, Arman Bostani wrote: > Hi all, > > I believe this problem has been brought to your attention before. But, > perhaps it something that can be added to the bug list. > > I'm using a proprietary Windows library which unfortunately uses callbacks a > lot. I've attached a simple test case. Under pypy 1.7, this runs about 8 > times slower than CPython 2.7. > > Best, -arman > Hi Arman. Thanks for the report! Sounds like a thing we would like to have a look at. Do you feel like filing a bug on bugs.pypy.org so it does not get forgotten? Cheers, fijal From a at bostani.us Mon Dec 19 14:30:12 2011 From: a at bostani.us (Arman Bostani) Date: Mon, 19 Dec 2011 05:30:12 -0800 Subject: [pypy-dev] very slow ctypes callbacks In-Reply-To: References: Message-ID: Done. Thanks a lot, -arman On Mon, Dec 19, 2011 at 1:21 AM, Maciej Fijalkowski wrote: > On Mon, Dec 19, 2011 at 6:27 AM, Arman Bostani wrote: > > Hi all, > > > > I believe this problem has been brought to your attention before. But, > > perhaps it something that can be added to the bug list. > > > > I'm using a proprietary Windows library which unfortunately uses > callbacks a > > lot. I've attached a simple test case. Under pypy 1.7, this runs about 8 > > times slower than CPython 2.7. > > > > Best, -arman > > > > Hi Arman. > > Thanks for the report! > > Sounds like a thing we would like to have a look at. Do you feel like > filing a bug on bugs.pypy.org so it does not get forgotten? > > Cheers, > fijal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brownan at gmail.com Wed Dec 21 15:45:03 2011 From: brownan at gmail.com (Andrew Brown) Date: Wed, 21 Dec 2011 09:45:03 -0500 Subject: [pypy-dev] Mixed pypy and cpython with multiprocessing Message-ID: Hello everyone, I've been playing around with distributed computing with multiprocessing on CPython, and thought I'd see if I could spin up some workers running pypy and have them connect to a server running CPython. Well, it didn't go so well. The client always gets an IOError: bad message length in the answer_challenge() function of connection.py.
Traceback (most recent call last): > File "app_main.py", line 51, in run_toplevel > File "client1.py", line 6, in > m.connect() > File > "[...]/pypy-1.7/lib-python/modified-2.7/multiprocessing/managers.py", line > 474, in connect > conn = Client(self._address, authkey=self._authkey) > File > "[...]/pypy-1.7/lib-python/modified-2.7/multiprocessing/connection.py", > line 149, in Client > answer_challenge(c, authkey) > File > "[...]/pypy-1.7/lib-python/modified-2.7/multiprocessing/connection.py", > line 383, in answer_challenge > message = connection.recv_bytes(256) # reject large message > IOError: bad message length You can try the simple example from the documentation: http://docs.python.org/library/multiprocessing.html#using-a-remote-manager to trigger this. Obviously CPython server and CPython client works, but I've also found that pypy server and pypy client works as well. It's only a mixed server and client that triggers this same error. (either way... pypy server + cpython client or cpython server + pypy client) Is this a bug? Or is multiprocessing not supposed to be compatible across implementations? I would think it is, since it's just sockets with pickled data, right? Thanks, Andrew Brown -------------- next part -------------- An HTML attachment was scrubbed... URL: From chef at ghum.de Wed Dec 21 16:06:07 2011 From: chef at ghum.de (Massa, Harald Armin) Date: Wed, 21 Dec 2011 16:06:07 +0100 Subject: [pypy-dev] Mixed pypy and cpython with multiprocessing In-Reply-To: References: Message-ID: Andrew, > > Is this a bug? Or is multiprocessing not supposed to be compatible across > implementations? I would think it is, since it's just sockets with pickled > data, right? > > As much as I know, pickles are not exactly a data-exchange format. Being that in a Pickle something quite similiar to the internal memory representation of Python objects is serialized. And the internal memory representation of cPython to the PyPy one. Best wishes Harald -- GHUM GmbH Harald Armin Massa Spielberger Stra?e 49 70435 Stuttgart 0173/9409607 Amtsgericht Stuttgart, HRB 734971 - persuadere. et programmare -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Wed Dec 21 16:07:34 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 21 Dec 2011 16:07:34 +0100 Subject: [pypy-dev] Mixed pypy and cpython with multiprocessing In-Reply-To: References: Message-ID: Hi, 2011/12/21 Andrew Brown > Hello everyone, > > I've been playing around with distributed computing with multiprocessing > on CPython, and thought I'd see if I could spin up some workers running > pypy and have them connect to a server running CPython. > > Well, it didn't go so well. The client always gets an IOError: bad message > length in the answer_challenge() function of connection.py. 
> > Traceback (most recent call last): >> File "app_main.py", line 51, in run_toplevel >> File "client1.py", line 6, in >> m.connect() >> File >> "[...]/pypy-1.7/lib-python/modified-2.7/multiprocessing/managers.py", line >> 474, in connect >> conn = Client(self._address, authkey=self._authkey) >> File >> "[...]/pypy-1.7/lib-python/modified-2.7/multiprocessing/connection.py", >> line 149, in Client >> answer_challenge(c, authkey) >> File >> "[...]/pypy-1.7/lib-python/modified-2.7/multiprocessing/connection.py", >> line 383, in answer_challenge >> message = connection.recv_bytes(256) # reject large message >> IOError: bad message length > > > You can try the simple example from the documentation: > http://docs.python.org/library/multiprocessing.html#using-a-remote-manager to > trigger this. Obviously CPython server and CPython client works, but I've > also found that pypy server and pypy client works as well. It's only a > mixed server and client that triggers this same error. (either way... pypy > server + cpython client or cpython server + pypy client) > > Is this a bug? Or is multiprocessing not supposed to be compatible across > implementations? I would think it is, since it's just sockets with pickled > data, right? > It's something I overlooked at the time: in pypy/module/_multiprocessing/interp_connection.py, the function do_send_string() has this short comment "# XXX htonl!". My bad. Of course it does not make any difference when client and servers run pypy on comparable architectures, but CPython correctly calls htonl, and ntohl when receiving. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Wed Dec 21 16:20:33 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 21 Dec 2011 16:20:33 +0100 Subject: [pypy-dev] Mixed pypy and cpython with multiprocessing In-Reply-To: References: Message-ID: 2011/12/21 Massa, Harald Armin > Is this a bug? Or is multiprocessing not supposed to be compatible across >> implementations? I would think it is, since it's just sockets with pickled >> data, right? >> >> As much as I know, pickles are not exactly a data-exchange format. Being > that in a Pickle something quite similiar to the internal memory > representation of Python objects is serialized. > > And the internal memory representation of cPython to the PyPy one. > Pickle uses known byte order and sizes, and has a stable set of formats and opcodes. Pickles generated on one platform can be read on a different one; even with different Python versions, even with PyPy vs. CPython. Otherwise it's a bug, like this one :) -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From brownan at gmail.com Wed Dec 21 16:27:11 2011 From: brownan at gmail.com (Andrew Brown) Date: Wed, 21 Dec 2011 10:27:11 -0500 Subject: [pypy-dev] Mixed pypy and cpython with multiprocessing In-Reply-To: References: Message-ID: On Wed, Dec 21, 2011 at 10:20 AM, Amaury Forgeot d'Arc wrote: > Pickle uses known byte order and sizes, and has a stable set of formats > and opcodes. > Pickles generated on one platform can be read on a different one; > even with different Python versions, even with PyPy vs. CPython. > > Otherwise it's a bug, like this one :) > > Neat! I found a bug! Would you like me to file an issue with the issue tracker? -Andrew -------------- next part -------------- An HTML attachment was scrubbed... 
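For reference, the wire framing involved is conceptually just a 4-byte message length in network byte order (what htonl produces) followed by the pickled payload; in Python terms, roughly:

import struct

payload = "some pickled data"
header = struct.pack("!I", len(payload))   # "!" = network (big-endian) byte order
message = header + payload

# the receiving side undoes it:
(length,) = struct.unpack("!I", message[:4])
assert length == len(payload)

A sender that writes the length in native (little-endian) order and a reader that expects network order disagree on that length, which is what the "bad message length" check in answer_challenge() trips over.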
URL: From amauryfa at gmail.com Wed Dec 21 16:28:10 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 21 Dec 2011 16:28:10 +0100 Subject: [pypy-dev] Mixed pypy and cpython with multiprocessing In-Reply-To: References: Message-ID: 2011/12/21 Andrew Brown > On Wed, Dec 21, 2011 at 10:20 AM, Amaury Forgeot d'Arc > wrote: > >> Pickle uses known byte order and sizes, and has a stable set of formats >> and opcodes. >> Pickles generated on one platform can be read on a different one; >> even with different Python versions, even with PyPy vs. CPython. >> >> Otherwise it's a bug, like this one :) >> >> Neat! I found a bug! Would you like me to file an issue with the issue > tracker? > Yes, please, before we forget it :-) -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From brownan at gmail.com Wed Dec 21 16:43:07 2011 From: brownan at gmail.com (Andrew Brown) Date: Wed, 21 Dec 2011 10:43:07 -0500 Subject: [pypy-dev] Mixed pypy and cpython with multiprocessing In-Reply-To: References: Message-ID: Bug filed (#971 )! Thanks for taking a look! -Andrew On Wed, Dec 21, 2011 at 10:28 AM, Amaury Forgeot d'Arc wrote: > > > 2011/12/21 Andrew Brown > >> On Wed, Dec 21, 2011 at 10:20 AM, Amaury Forgeot d'Arc < >> amauryfa at gmail.com> wrote: >> >>> Pickle uses known byte order and sizes, and has a stable set of formats >>> and opcodes. >>> Pickles generated on one platform can be read on a different one; >>> even with different Python versions, even with PyPy vs. CPython. >>> >>> Otherwise it's a bug, like this one :) >>> >>> Neat! I found a bug! Would you like me to file an issue with the issue >> tracker? >> > > Yes, please, before we forget it :-) > > -- > Amaury Forgeot d'Arc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrewfr_ice at yahoo.com Wed Dec 21 20:38:09 2011 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Wed, 21 Dec 2011 11:38:09 -0800 (PST) Subject: [pypy-dev] The Prototyping Join Patterns with stackless.py Slides Message-ID: <1324496289.36414.YahooMailNeo@web120703.mail.ne1.yahoo.com> Hi Folks: I recently gave a rather hastily prepared talk at Montreal Python called "How to Solve a Problem like Santa Claus: Prototyping Join Patterns with stackless.py for Stackless Python." One can get the slides at? http://wp.me/pdqoq-46 more information is on my blog. http://andrewfr.wordpress.com once again, I demonstrate how stackless.py can be used to prototype concurrency features for Stackless Python. This time the feature is join patterns. I feel I have come up with a good compromise for implementing this powerful concurrency construct without breaking the Stackless Python API (however channel.balance does become unnecessary). More important though, is during the development, I started to come across several powerful high level and low level concurrency concepts that I think would benefit a future Stackless Python. What particularly pleases me is that we have gone, in little over a year, from a Stackless with no select (a common slight of Stackless Python in the Go Mailing list) to a Stackless version that can handily express stuff that would be rather verbose in Go. And I am the first to attempt that I have a half-baked knowledge of the underlying models for Stackless, greenlets, and now continuelets (I need to shore up my knowledge in these areas). All this would not have been possible without PyPy's stackless.py module. stackless.py rocks! 
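To give a feel for what join patterns buy you: with just the existing primitives, waiting for one value from each of two channels means receiving from them in sequence, so the tasklet sits blocked on the first channel even if the second one already has a sender waiting. A minimal sketch against the plain stackless API (this is not the prototype's join syntax, just the baseline it improves on):

import stackless

def wait_both(ch1, ch2):
    # blocks on ch1 first, regardless of which channel is ready
    a = ch1.receive()
    b = ch2.receive()
    print "joined:", a, b

ch1 = stackless.channel()
ch2 = stackless.channel()
stackless.tasklet(wait_both)(ch1, ch2)
stackless.tasklet(ch2.send)(2)
stackless.tasklet(ch1.send)(1)
stackless.run()
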
In the weeks to come, I am going to finish off the current prototype, write more documentation, write more tests (just to make sure that I am out of the woods) and examples, build a version that works with continuelets and post everything to the Stackless Repository. If folks can accept something more raw and in progress, I would be happy to post the code in a bitbucket account. As was pointed out by Richard and Christian in previous posts, the API is a particularly weak area. Anyhow, I would greatly appreciate feedback. Feedback helps the prototyping efforts! Cheers, Andrew From andrewfr_ice at yahoo.com Thu Dec 22 01:27:39 2011 From: andrewfr_ice at yahoo.com (Andrew Francis) Date: Wed, 21 Dec 2011 16:27:39 -0800 (PST) Subject: [pypy-dev] Oops Re: The Prototyping Join Patterns with stackless.py Slides In-Reply-To: <1324496289.36414.YahooMailNeo@web120703.mail.ne1.yahoo.com> References: <1324496289.36414.YahooMailNeo@web120703.mail.ne1.yahoo.com> Message-ID: <1324513659.77112.YahooMailNeo@web120704.mail.ne1.yahoo.com> ________________________________ From: Andrew Francis To: "stackless at stackless.com" Cc: "pypy-dev at codespeak.net" Sent: Wednesday, December 21, 2011 2:38 PM Subject: [pypy-dev] The Prototyping Join Patterns with stackless.py Slides Hi Folks: A silly mistake: >And I am the first to attempt that I have a half-baked knowledge of the underlying models for Stackless, Oops, I meant 'admit', not 'attempt'. Cheers, Andrew From arigo at tunes.org Thu Dec 22 14:55:05 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 22 Dec 2011 14:55:05 +0100 Subject: [pypy-dev] Sprint in Leysin? In-Reply-To: References: Message-ID: Hi all, I have made pre-reservations for the dates 15-22nd (Sunday-Sunday). Hope to see you there :-) I will soon make it official, e.g. with a blog post. Armin From colin.kern at gmail.com Thu Dec 22 21:24:53 2011 From: colin.kern at gmail.com (Colin Kern) Date: Thu, 22 Dec 2011 15:24:53 -0500 Subject: [pypy-dev] Pypy memory usage Message-ID: Hi all, I have a program that uses the multiprocessing package to run a set of jobs. When I run it using the standard python interpreter on my computer, top shows 8 threads each using about 40M of memory, which remains fairly steady. When I use pypy, both 1.6 and 1.7, the 8 threads almost immediately show using 90-100M of memory, and then that continues to climb as the program runs.
Each job runs a lot faster in > pypy, but usually before all the jobs are done, the memory on the > system is exhausted and swapping starts, which brings the execution > speed to a crawl. Is this something anyone else has experienced? > > Thanks, > Colin > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Hi Colin. Thanks for the bug report, but we can't really help you without seeing the code. There has been some issues like this in the past, however most of them has been fixed, as far as we know. If you can isolate a preferably small example, we would be happy to help you. Cheers, fijal From arigo at tunes.org Fri Dec 23 14:46:37 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 23 Dec 2011 14:46:37 +0100 Subject: [pypy-dev] Generators in RPython Message-ID: Hi all, RPython now has support for generators. The support is minimal for now: * only pre-Python-2.5 generators: no .send(), no "yield" inside a "try:finally:" block, etc. * you cannot use a "for" loop to iterate over a generator-iterator. You have to use explicitly .next() and catch the StopIteration, for now. * two different generators produce generator-iterator objects that the annotator cannot unify, even if they happen to yield objects of the same type. You have to work around that limitation e.g. as shown in rpython/test/test_generator, test_cannot_merge. I still hope that this is enough to be useful to the Prolog and Converge interpreters :-) A bient?t, Armin. From fijall at gmail.com Fri Dec 23 22:40:57 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 23 Dec 2011 23:40:57 +0200 Subject: [pypy-dev] Generators in RPython In-Reply-To: References: Message-ID: On Fri, Dec 23, 2011 at 3:46 PM, Armin Rigo wrote: > Hi all, > > RPython now has support for generators. ?The support is minimal for now: > > * only pre-Python-2.5 generators: no .send(), no "yield" inside a > "try:finally:" block, etc. > > * you cannot use a "for" loop to iterate over a generator-iterator. > You have to use explicitly .next() and catch the StopIteration, for > now. > > * two different generators produce generator-iterator objects that the > annotator cannot unify, even if they happen to yield objects of the > same type. ?You have to work around that limitation e.g. as shown in > rpython/test/test_generator, test_cannot_merge. > > I still hope that this is enough to be useful to the Prolog and > Converge interpreters :-) > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev hooray! From papito.dit at gmail.com Sat Dec 24 14:46:34 2011 From: papito.dit at gmail.com (Michael Sioutis) Date: Sat, 24 Dec 2011 15:46:34 +0200 Subject: [pypy-dev] My experience with PyPy Message-ID: Hello! I haven't been following the list, cause I joined yesterday in order to share my impressions of PyPy here, so I don't know if these kinds of posts are frequent and annoying. I'm using PyPy the last few weeks in a qualitative spatial reasoner I am developing. The speedup is up to 10 times over CPython, and it grows even more as the input size grows bigger, but that's not what impressed me the most, because I was expecting that. You can find in my presentation here at slides 25 and 33 some comparison diagrams against C (red color) and C++ (blue color) implementations of analogous reasoners. 
I am using different data structures and slightly modified algorithms, so you should not consider that the comparison diagrams is upon the exact same piece of code. Long story short, you will find out that the pypy reasoner ranks in positions 1 and 2 respectively, and I believe it will also rank 1 in the second diagram if I grow the input even more (I currently don't have enough memory and the C implementation tops it at 900 nodes, so it would be unreasonable and unfair to run on my own). The impressive part of both diagrams is the scalability pypy offers me. It starts slower than the statically compiled languages, but the more you push the input sizes, the faster it goes compared to the other two. I started developing the reasoner in python not necessarily to be fast in timings, but because I want to go beyond the state of the art in qualitative spatial reasoning and present sth new, so python was the language of choice to do it fast, in terms of working hours :) This is future work, that will be based on the current reasoner. *Thank you* for being patient, if you actually took the time to read my story, and have a Merry Xmas! Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat Dec 24 16:39:30 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 24 Dec 2011 17:39:30 +0200 Subject: [pypy-dev] My experience with PyPy In-Reply-To: References: Message-ID: On Sat, Dec 24, 2011 at 3:46 PM, Michael Sioutis wrote: > Hello! > > I haven't been?following?the list, cause I joined yesterday in order to > share my?impressions > of PyPy here, so I don't know if these kinds of posts are frequent and > annoying. Hi Michael They're more than welcome here. PyPy lacks a bit lots of success stories, so we definitely welcome each of them > > I'm using PyPy the last few weeks in a qualitative spatial reasoner I am > developing. The speedup is up to > 10 times over CPython, and it grows even more as the input size grows > bigger, but that's not what impressed > me the most, because I was expecting that. > > You can find in my presentation?here?at slides 25 and 33 some comparison > diagrams against C (red color) and C++ > (blue color)?implementations of analogous reasoners. > I am using different data structures and?slightly?modified algorithms, so > you should not consider that the comparison > diagrams is upon the exact same piece of code. > Long story short, you will find out that the pypy reasoner ranks in > positions 1 and 2 respectively, and I believe it will > also rank 1 in the second diagram if I grow the input even more (I currently > don't have enough memory and the C implementation > tops it at 900 nodes, so it would be unreasonable and unfair to run on my > own). > > The impressive part of both diagrams is the?scalability?pypy offers me. It > starts slower than the statically compiled languages, > but the more you push the input sizes, the faster it goes compared to the > other two. > > I started developing the reasoner in python not?necessarily?to be fast in > timings, but because I want to go beyond the state of the art in > qualitative?spatial reasoning and present sth new, so python was the > language of choice to do it fast, in terms of working hours :) > This is future work, that will be based on the current reasoner. > > Thank you?for being patient, if you actually took the time to read my story, > and have a Merry Xmas! > Mike > > This is all very impressive! 
The reason why pypy "scales" well is probably because JIT takes time to kick in, so for the short-running examples it does not run at the full speed. It's really cool to see PyPy enabling people to do *really cool stuff* in Python, that was not entirely possible before. Makes us want to work harder :) Thanks for sharing this! Cheers, fijal From arigo at tunes.org Tue Dec 27 18:05:57 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 27 Dec 2011 18:05:57 +0100 Subject: [pypy-dev] Leysin Winter Sprint Message-ID: ===================================================================== PyPy Leysin Winter Sprint (15-22nd January 2012) ===================================================================== The next PyPy sprint will be in Leysin, Switzerland, for the eighth time. This is a fully public sprint: newcomers and topics other than those proposed below are welcome. ------------------------------ Goals and topics of the sprint ------------------------------ * Py3k: work towards supporting Python 3 in PyPy * NumPyPy: work towards supporting the numpy module in PyPy * JIT backends: integrate tests for ARM; look at the PowerPC 64; maybe try again to write an LLVM- or GCC-based one * STM and STM-related topics; or the Concurrent Mark-n-Sweep GC * And as usual, the main side goal is to have fun in winter sports :-) We can take a day off for ski. ----------- Exact times ----------- The work days should be 15-21 January 2011 (Sunday-Saturday). The official plans are for people to arrive on the 14th or the 15th, and to leave on the 22nd. ----------------------- Location & Accomodation ----------------------- Leysin, Switzerland, "same place as before". Let me refresh your memory: both the sprint venue and the lodging will be in a very spacious pair of chalets built specifically for bed & breakfast: http://www.ermina.ch/. The place has a good ADSL Internet connexion with wireless installed. You can of course arrange your own lodging anywhere (as long as you are in Leysin, you cannot be more than a 15 minutes walk away from the sprint venue), but I definitely recommend lodging there too -- you won't find a better view anywhere else (though you probably won't get much worse ones easily, either :-) Please *confirm* that you are coming so that we can adjust the reservations as appropriate. The rate so far has been around 60 CHF a night all included in 2-person rooms, with breakfast. There are larger rooms too (less expensive) and maybe the possibility to get a single room if you really want to. Please register by Mercurial:: https://bitbucket.org/pypy/extradoc/ https://bitbucket.org/pypy/extradoc/raw/extradoc/sprintinfo/leysin-winter-2012 or on the pypy-dev mailing list if you do not yet have check-in rights: http://mail.python.org/mailman/listinfo/pypy-dev You need a Swiss-to-(insert country here) power adapter. There will be some Swiss-to-EU adapters around -- bring a EU-format power strip if you have one. From ned at nedbatchelder.com Wed Dec 28 03:30:05 2011 From: ned at nedbatchelder.com (Ned Batchelder) Date: Tue, 27 Dec 2011 21:30:05 -0500 Subject: [pypy-dev] Use of marshal in the sandbox: is stdlib marshal OK? Message-ID: <4EFA7F2D.2000804@nedbatchelder.com> The sandbox uses pypy's own implementation of marshal. In pypy/translator/sandbox/sandlib.py is this comment: # Note: we use lib_pypy/marshal.py instead of the built-in marshal # for two reasons. The built-in module could be made to segfault # or be attackable in other ways by sending malicious input to # load(). 
Also, marshal.load(f) blocks with the GIL held when # f is a pipe with no data immediately avaialble, preventing the # _waiting_thread to run. I'd like to remove as many dependencies as possible from the sandbox code, so I'd like to explore the possibility of using the standard library marshal module. The first reason above is about crashing marshal with malicious input. To my thinking, we are in control of what data is marshaled, so we don't have to worry about malicious input. The untrusted Python code running in the sandbox doesn't have a way of sending marshaled data, so we don't have to worry that it will be used to attack the marshal module. The stdout of the untrusted Python code will become a string that is marshaled, but that doesn't provide a way for the untrusted code to attack the marshal module. Or have I missed something? The second reason I can't address, is this still a problem? What bad effects will we see if it is? --Ned. From blendmaster1024 at gmail.com Wed Dec 28 04:09:22 2011 From: blendmaster1024 at gmail.com (lahwran) Date: Tue, 27 Dec 2011 20:09:22 -0700 Subject: [pypy-dev] Use of marshal in the sandbox: is stdlib marshal OK? In-Reply-To: <4EFA7F2D.2000804@nedbatchelder.com> References: <4EFA7F2D.2000804@nedbatchelder.com> Message-ID: it will become an issue if there is a bug in the marshal code inside pypy-c-sandbox which is /creating/ the marshalled data, a bug that would allow a sandboxed program to alter the marshalled data in such a way that it can exploit the vulnerability of the stdlib marshal. Doesn't sound too likely, but in the spirit of having as many layers of security as possible, I propose simply bundling pypy's marshal.py with the sandbox. -- lahwran On Tue, Dec 27, 2011 at 7:30 PM, Ned Batchelder wrote: > The sandbox uses pypy's own implementation of marshal. ?In > pypy/translator/sandbox/sandlib.py is this comment: > > # Note: we use lib_pypy/marshal.py instead of the built-in marshal > # for two reasons. ?The built-in module could be made to segfault > # or be attackable in other ways by sending malicious input to > # load(). ?Also, marshal.load(f) blocks with the GIL held when > # f is a pipe with no data immediately avaialble, preventing the > # _waiting_thread to run. > > I'd like to remove as many dependencies as possible from the sandbox code, > so I'd like to explore the possibility of using the standard library marshal > module. > > The first reason above is about crashing marshal with malicious input. ?To > my thinking, we are in control of what data is marshaled, so we don't have > to worry about malicious input. ?The untrusted Python code running in the > sandbox doesn't have a way of sending marshaled data, so we don't have to > worry that it will be used to attack the marshal module. ?The stdout of the > untrusted Python code will become a string that is marshaled, but that > doesn't provide a way for the untrusted code to attack the marshal module. > ?Or have I missed something? > > The second reason I can't address, is this still a problem? ?What bad > effects will we see if it is? > > --Ned. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev From ned at nedbatchelder.com Wed Dec 28 15:34:28 2011 From: ned at nedbatchelder.com (Ned Batchelder) Date: Wed, 28 Dec 2011 09:34:28 -0500 Subject: [pypy-dev] Use of marshal in the sandbox: is stdlib marshal OK? 
In-Reply-To: References: <4EFA7F2D.2000804@nedbatchelder.com> Message-ID: <4EFB28F4.9010500@nedbatchelder.com> I guess that is a possibility, but another principle is to use well-used and widely-reviewed code where possible, no? I guess the problem is that built-in marshal isn't trying hard to protect itself against malicious data? The problem with "bundling pypy's marshal.py" is that it pulls in a lot of infrastructure modules, which bulks up the calling process. Maybe there's some low-hanging fruit there that we can trim. Any thoughts on the second issue? --Ned. On 12/27/2011 10:09 PM, lahwran wrote: > it will become an issue if there is a bug in the marshal code inside > pypy-c-sandbox which is /creating/ the marshalled data, a bug that > would allow a sandboxed program to alter the marshalled data in such a > way that it can exploit the vulnerability of the stdlib marshal. > Doesn't sound too likely, but in the spirit of having as many layers > of security as possible, I propose simply bundling pypy's marshal.py > with the sandbox. > > -- lahwran > > On Tue, Dec 27, 2011 at 7:30 PM, Ned Batchelder wrote: >> The sandbox uses pypy's own implementation of marshal. In >> pypy/translator/sandbox/sandlib.py is this comment: >> >> # Note: we use lib_pypy/marshal.py instead of the built-in marshal >> # for two reasons. The built-in module could be made to segfault >> # or be attackable in other ways by sending malicious input to >> # load(). Also, marshal.load(f) blocks with the GIL held when >> # f is a pipe with no data immediately avaialble, preventing the >> # _waiting_thread to run. >> >> I'd like to remove as many dependencies as possible from the sandbox code, >> so I'd like to explore the possibility of using the standard library marshal >> module. >> >> The first reason above is about crashing marshal with malicious input. To >> my thinking, we are in control of what data is marshaled, so we don't have >> to worry about malicious input. The untrusted Python code running in the >> sandbox doesn't have a way of sending marshaled data, so we don't have to >> worry that it will be used to attack the marshal module. The stdout of the >> untrusted Python code will become a string that is marshaled, but that >> doesn't provide a way for the untrusted code to attack the marshal module. >> Or have I missed something? >> >> The second reason I can't address, is this still a problem? What bad >> effects will we see if it is? >> >> --Ned. >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> http://mail.python.org/mailman/listinfo/pypy-dev From arigo at tunes.org Wed Dec 28 17:49:06 2011 From: arigo at tunes.org (Armin Rigo) Date: Wed, 28 Dec 2011 17:49:06 +0100 Subject: [pypy-dev] Use of marshal in the sandbox: is stdlib marshal OK? In-Reply-To: <4EFB28F4.9010500@nedbatchelder.com> References: <4EFA7F2D.2000804@nedbatchelder.com> <4EFB28F4.9010500@nedbatchelder.com> Message-ID: Hi Ned, On Wed, Dec 28, 2011 at 15:34, Ned Batchelder wrote: > The problem with "bundling pypy's marshal.py" is that it pulls in a lot of > infrastructure modules, which bulks up the calling process. Unsure what you mean. It seems to me that lib_pypy/marshal.py just imports lib_pypy/_marshal.py, which itself doesn't import any non-standard Python module. A bient?t, Armin. 
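On the second reason quoted above (marshal.load blocking on a pipe with the GIL held), one possible workaround, whichever marshal implementation ends up being used, is to never hand the pipe to marshal at all: read the bytes off the pipe yourself and only then decode them in memory with marshal.loads. A rough sketch follows; the 4-byte length framing here is an assumption for illustration, not sandlib's actual protocol.

-------------------------------
import os
import struct
import marshal

def read_exactly(fd, n):
    # os.read releases the GIL while it waits for the pipe, and marshal never
    # sees the file descriptor, so it cannot block inside C code holding the GIL.
    chunks = []
    while n > 0:
        data = os.read(fd, n)
        if not data:
            raise EOFError("sandboxed subprocess closed the pipe")
        chunks.append(data)
        n -= len(data)
    return "".join(chunks)

def read_message(fd):
    # Assumed framing: a 4-byte big-endian length followed by the marshalled data.
    (length,) = struct.unpack("!i", read_exactly(fd, 4))
    return marshal.loads(read_exactly(fd, length))
-------------------------------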
From ned at nedbatchelder.com Thu Dec 29 13:36:40 2011 From: ned at nedbatchelder.com (Ned Batchelder) Date: Thu, 29 Dec 2011 07:36:40 -0500 Subject: [pypy-dev] Use of marshal in the sandbox: is stdlib marshal OK? In-Reply-To: References: <4EFA7F2D.2000804@nedbatchelder.com> <4EFB28F4.9010500@nedbatchelder.com> Message-ID: <4EFC5ED8.9040208@nedbatchelder.com> On 12/28/2011 11:49 AM, Armin Rigo wrote: > Hi Ned, > > On Wed, Dec 28, 2011 at 15:34, Ned Batchelder wrote: >> The problem with "bundling pypy's marshal.py" is that it pulls in a lot of >> infrastructure modules, which bulks up the calling process. > Unsure what you mean. It seems to me that lib_pypy/marshal.py just > imports lib_pypy/_marshal.py, which itself doesn't import any > non-standard Python module. True, but we use py to import it, why is that? Perhaps I'm barking up the wrong tree trying to reduce the size of the pypy source tree I need alongside my sandbox. --Ned. > > A bient?t, > > Armin. > From fijall at gmail.com Thu Dec 29 17:00:04 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 29 Dec 2011 18:00:04 +0200 Subject: [pypy-dev] Use of marshal in the sandbox: is stdlib marshal OK? In-Reply-To: <4EFC5ED8.9040208@nedbatchelder.com> References: <4EFA7F2D.2000804@nedbatchelder.com> <4EFB28F4.9010500@nedbatchelder.com> <4EFC5ED8.9040208@nedbatchelder.com> Message-ID: On Thu, Dec 29, 2011 at 2:36 PM, Ned Batchelder wrote: > On 12/28/2011 11:49 AM, Armin Rigo wrote: >> >> Hi Ned, >> >> On Wed, Dec 28, 2011 at 15:34, Ned Batchelder >> ?wrote: >>> >>> The problem with "bundling pypy's marshal.py" is that it pulls in a lot >>> of >>> infrastructure modules, which bulks up the calling process. >> >> Unsure what you mean. ?It seems to me that lib_pypy/marshal.py just >> imports lib_pypy/_marshal.py, which itself doesn't import any >> non-standard Python module. > > True, but we use py to import it, why is that? ?Perhaps I'm barking up the > wrong tree trying to reduce the size of the pypy source tree I need > alongside my sandbox. > No, it's actually good to keep sandbox relatively separate from the pypy tree. From arigo at tunes.org Thu Dec 29 21:53:20 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 29 Dec 2011 21:53:20 +0100 Subject: [pypy-dev] Use of marshal in the sandbox: is stdlib marshal OK? In-Reply-To: References: <4EFA7F2D.2000804@nedbatchelder.com> <4EFB28F4.9010500@nedbatchelder.com> <4EFC5ED8.9040208@nedbatchelder.com> Message-ID: Hi, On Thu, Dec 29, 2011 at 17:00, Maciej Fijalkowski wrote: >> True, but we use py to import it, why is that? ?Perhaps I'm barking up the >> wrong tree trying to reduce the size of the pypy source tree I need >> alongside my sandbox. > > No, it's actually good to keep sandbox relatively separate from the pypy tree. Yes, indeed. We should do that, maybe moving sandbox-the-external-process into a separate repository. This would give people more confidence hacking the sandbox code, knowing that it's all regular Python code. A bient?t, Armin. From romain.py at gmail.com Fri Dec 30 16:36:24 2011 From: romain.py at gmail.com (Romain Guillebert) Date: Fri, 30 Dec 2011 16:36:24 +0100 Subject: [pypy-dev] Leysin Winter Sprint In-Reply-To: References: Message-ID: Hi Armin I'm trying to see how I can get there, I looked at the trains and it seems that going to Aigle first is the only way to go to Leysin, is that right ? 
Thanks Romain On Tue, Dec 27, 2011 at 6:05 PM, Armin Rigo wrote: > ===================================================================== > PyPy Leysin Winter Sprint (15-22nd January 2012) > ===================================================================== > > The next PyPy sprint will be in Leysin, Switzerland, for the > eighth time. This is a fully public sprint: newcomers and topics > other than those proposed below are welcome. > > ------------------------------ > Goals and topics of the sprint > ------------------------------ > > * Py3k: work towards supporting Python 3 in PyPy > > * NumPyPy: work towards supporting the numpy module in PyPy > > * JIT backends: integrate tests for ARM; look at the PowerPC 64; > maybe try again to write an LLVM- or GCC-based one > > * STM and STM-related topics; or the Concurrent Mark-n-Sweep GC > > * And as usual, the main side goal is to have fun in winter sports :-) > We can take a day off for ski. > > ----------- > Exact times > ----------- > > The work days should be 15-21 January 2011 (Sunday-Saturday). The > official plans are for people to arrive on the 14th or the 15th, and to > leave on the 22nd. > > ----------------------- > Location & Accomodation > ----------------------- > > Leysin, Switzerland, "same place as before". Let me refresh your > memory: both the sprint venue and the lodging will be in a very spacious > pair of chalets built specifically for bed & breakfast: > http://www.ermina.ch/. The place has a good ADSL Internet connexion > with wireless installed. You can of course arrange your own > lodging anywhere (as long as you are in Leysin, you cannot be more than a > 15 minutes walk away from the sprint venue), but I definitely recommend > lodging there too -- you won't find a better view anywhere else (though you > probably won't get much worse ones easily, either :-) > > Please *confirm* that you are coming so that we can adjust the reservations > as appropriate. The rate so far has been around 60 CHF a night all > included > in 2-person rooms, with breakfast. There are larger rooms too (less > expensive) and maybe the possibility to get a single room if you really > want > to. > > Please register by Mercurial:: > > https://bitbucket.org/pypy/extradoc/ > > https://bitbucket.org/pypy/extradoc/raw/extradoc/sprintinfo/leysin-winter-2012 > > or on the pypy-dev mailing list if you do not yet have check-in rights: > > http://mail.python.org/mailman/listinfo/pypy-dev > > You need a Swiss-to-(insert country here) power adapter. There will be > some Swiss-to-EU adapters around -- bring a EU-format power strip if you > have one. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Fri Dec 30 17:45:53 2011 From: arigo at tunes.org (Armin Rigo) Date: Fri, 30 Dec 2011 17:45:53 +0100 Subject: [pypy-dev] Leysin Winter Sprint In-Reply-To: References: Message-ID: Hi Romain, On Fri, Dec 30, 2011 at 16:36, Romain Guillebert wrote: > I'm trying to see how I can get there, I looked at the trains and it seems > that going to Aigle first is the only way to go to Leysin, is that right ? Yes, Aigle is the downhill station of the Aigle-Leysin "small train" line. A bient?t, Armin. 
From coolbutuseless at gmail.com Sat Dec 31 10:24:45 2011 From: coolbutuseless at gmail.com (mike c) Date: Sat, 31 Dec 2011 19:24:45 +1000 Subject: [pypy-dev] micronumpy var & std patch Message-ID: Hi All, Find attached a patch (to default branch) to add var and std to micronumpy. Two simple tests were added, and all the micronumpy module tests still pass. Feedback is welcome. Not sure if there's a better way of nesting the method calls in descr_var() - currently I have to do the calculation a bit at a time and after each calculation assert that what I get back is actually a BaseArray. Not sure if this should go to the mailing list or to bugs.pypy.org. Hope here is ok. Mike. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpypy_var_std.patch Type: application/octet-stream Size: 3213 bytes Desc: not available URL: From fijall at gmail.com Sat Dec 31 11:09:23 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 31 Dec 2011 12:09:23 +0200 Subject: [pypy-dev] micronumpy var & std patch In-Reply-To: References: Message-ID: On Sat, Dec 31, 2011 at 11:24 AM, mike c wrote: > Hi All, > > Find attached a patch (to default branch) to add var and std to micronumpy. > ?Two simple tests were added, and all the micronumpy module tests still > pass. > > Feedback is welcome. Not sure if there's a better way of nesting the method > calls in descr_var() - currently I have to do the calculation a bit at a > time and after each calculation assert that what I get back is actually a > BaseArray. > > Not sure if this should go to the mailing list or to bugs.pypy.org. ?Hope > here is ok. > > Mike. > Hey Thanks for the patch! Will review it as soon as I have some time Cheers, fijal From laurie at tratt.net Sat Dec 31 11:03:04 2011 From: laurie at tratt.net (Laurence Tratt) Date: Sat, 31 Dec 2011 10:03:04 +0000 Subject: [pypy-dev] How to turn a crawling caterpillar of a VM into a graceful butterfly Message-ID: <20111231100304.GA22828@phase.tratt.net> Hi all, As many of you know, over the past few months I've been creating an RPython VM for the Converge language . This is now mostly complete at a basic level - enough to run pretty much all of my Converge programs at least on Linux and OpenBSD. [Major remaining issues are: no floating point numbers; 32 bit support is not finished; may not compile on OS X.] First, the good news. The RPython VM (~3 months effort) is currently close to 3 times faster than the old C-based VM (~18 months effort). I think the Converge VM is the first medium-scale VM to be created by someone outside the core PyPy group, so those numbers are a testament to the power of the RPython approach. I'd like to thank (alphabetically) Carl Friedrich Bolz, Maciej Fijalkowski, and Armin Rigo who've offered help, encouragement and (in Armin's case) big changes to RPython to help the Converge VM. I wouldn't have got this far without their help, or of others on the PyPy IRC channel. Now, the bad news: the RPython VM should be a lot faster than it currently is as the old VM is (to say the least) not very good. In large part this is because I simply don't know how best to optimise an RPython VM, particularly the JIT (in fact, at the moment the JITted VM seems to be sometimes slower than the non-JIT VM: I don't have an explanation for why). I'm therefore soliciting advice / code from those more knowledgeable than myself on how to optimise an RPython VM. 
Note that I'm intentionally being more general than the Converge VM: I hope that some of the ideas generated will prove useful to the many VMs that I hope will come to be implemented in RPython. Some of my ignorance is simply that many parts of RPython have little or no documentation: by playing the part of a clueless outsider (it comes naturally to me!), I hope I may help pinpoint areas where further documentation is most needed. If you want to test out the VM it's here: https://github.com/ltratt/converge Building should mostly be: $ export PYPY_SRC= $ cd converge $ ./configure $ make That will build a JITted VM. If you want a lower level of optimisation, specify it to configure e.g. "./configure --opt=3". Please note that, for the time being, on Linux the JIT won't work with the default GC root finder, so you'll need to manually specify --gcrootfinder=shadowstack in vm/Makefile. Apart from that, building on 64 bit Unix should hopefully be reasonably simple. A simple performance benchmark is along the lines of "make clean ; cd vm ; make ; cd .. ; time make regress" which builds the compiler, various examples, and the handful of tests that Converge comes with. [Please note, Converge wasn't built with a TDD philosophy; while I welcome contributions of tests, I am unable to make any major efforts in that regard myself in the short-term. I know this sits uneasily with many in the PyPy community, but I hope you are able to overlook this difference in development philosophy.] Here are some examples of questions I'd love to know answers to: * Why is the JITted VM that's built sometimes 2x slower than --opt=3, but other times a few percent faster *on the same benchmark*?! * What are virtualrefs? * What is a more precise semantics of elidable? * What is 'specialize'? * Is it worth manually inlining functions in the main VM loop? * Can I avoid the (many) calls to rffi.charp[size]2str, given that they're mostly taken from the never-free'd mod_bc? Would this work well on some GCs but not others? No doubt there are many other things I would do well to know, but plainly do not - please educate me! If you have any questions or comments about Converge or the VM, please don't hesitate to ask me - and thank you in advance for your help. Yours, Laurie -- Personal http://tratt.net/laurie/ The Converge programming language http://convergepl.org/ https://github.com/ltratt http://twitter.com/laurencetratt From fijall at gmail.com Sat Dec 31 11:29:40 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 31 Dec 2011 12:29:40 +0200 Subject: [pypy-dev] How to turn a crawling caterpillar of a VM into a graceful butterfly In-Reply-To: <20111231100304.GA22828@phase.tratt.net> References: <20111231100304.GA22828@phase.tratt.net> Message-ID: On Sat, Dec 31, 2011 at 12:03 PM, Laurence Tratt wrote: > Hi all, > > As many of you know, over the past few months I've been creating an RPython > VM for the Converge language . This is now mostly > complete at a basic level - enough to run pretty much all of my Converge > programs at least on Linux and OpenBSD. [Major remaining issues are: no > floating point numbers; 32 bit support is not finished; may not compile on OS > X.] > > First, the good news. The RPython VM (~3 months effort) is currently close to > 3 times faster than the old C-based VM (~18 months effort). I think the > Converge VM is the first medium-scale VM to be created by someone outside the > core PyPy group, so those numbers are a testament to the power of the RPython > approach. 
I'd like to thank (alphabetically) Carl Friedrich Bolz, Maciej > Fijalkowski, and Armin Rigo who've offered help, encouragement and (in > Armin's case) big changes to RPython to help the Converge VM. I wouldn't have > got this far without their help, or of others on the PyPy IRC channel. > > Now, the bad news: the RPython VM should be a lot faster than it currently is > as the old VM is (to say the least) not very good. In large part this is > because I simply don't know how best to optimise an RPython VM, particularly > the JIT (in fact, at the moment the JITted VM seems to be sometimes slower > than the non-JIT VM: I don't have an explanation for why). I'm therefore > soliciting advice / code from those more knowledgeable than myself on how to > optimise an RPython VM. Note that I'm intentionally being more general than > the Converge VM: I hope that some of the ideas generated will prove useful to > the many VMs that I hope will come to be implemented in RPython. Some of my > ignorance is simply that many parts of RPython have little or no > documentation: by playing the part of a clueless outsider (it comes naturally > to me!), I hope I may help pinpoint areas where further documentation is most > needed. > > If you want to test out the VM it's here: > > ?https://github.com/ltratt/converge > > Building should mostly be: > > ?$ export PYPY_SRC= > ?$ cd converge > ?$ ./configure > ?$ make > > That will build a JITted VM. If you want a lower level of optimisation, > specify it to configure e.g. "./configure --opt=3". Please note that, for the > time being, on Linux the JIT won't work with the default GC root finder, so > you'll need to manually specify --gcrootfinder=shadowstack in vm/Makefile. > Apart from that, building on 64 bit Unix should hopefully be reasonably > simple. > > A simple performance benchmark is along the lines of "make clean ; cd vm ; > make ; cd .. ; time make regress" which builds the compiler, various > examples, and the handful of tests that Converge comes with. [Please note, > Converge wasn't built with a TDD philosophy; while I welcome contributions of > tests, I am unable to make any major efforts in that regard myself in the > short-term. I know this sits uneasily with many in the PyPy community, but I > hope you are able to overlook this difference in development philosophy.] > > Here are some examples of questions I'd love to know answers to: > > ?* Why is the JITted VM that's built sometimes 2x slower than --opt=3, > ? ?but other times a few percent faster *on the same benchmark*?! > ?* What are virtualrefs? > ?* What is a more precise semantics of elidable? > ?* What is 'specialize'? > ?* Is it worth manually inlining functions in the main VM loop? > ?* Can I avoid the (many) calls to rffi.charp[size]2str, given that > ? ?they're mostly taken from the never-free'd mod_bc? Would this work > ? ?well on some GCs but not others? > > No doubt there are many other things I would do well to know, but plainly do > not - please educate me! > > If you have any questions or comments about Converge or the VM, please don't > hesitate to ask me - and thank you in advance for your help. > > Yours, > > > Laurie > -- > Personal ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? http://tratt.net/laurie/ > The Converge programming language ? ? ? ? ? ? ? ? ? ? ?http://convergepl.org/ > ? https://github.com/ltratt ? ? ? ? ? ? 
?http://twitter.com/laurencetratt > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > http://mail.python.org/mailman/listinfo/pypy-dev Hi Laurence. Overall great work, but I have to point out one thing - if you want us to look into benchmarks, you have to provide a precise benchmark and a precise way to run it (few examples would be awesome), otherwise for people who are not aware of the language this might pose a challenge. PS. Glad to be of any help Cheers, fijal From laurie at tratt.net Sat Dec 31 16:26:26 2011 From: laurie at tratt.net (Laurence Tratt) Date: Sat, 31 Dec 2011 15:26:26 +0000 Subject: [pypy-dev] How to turn a crawling caterpillar of a VM into a graceful butterfly In-Reply-To: References: <20111231100304.GA22828@phase.tratt.net> Message-ID: <20111231152626.GA32360@phase.tratt.net> On Sat, Dec 31, 2011 at 12:29:40PM +0200, Maciej Fijalkowski wrote: Hi Maciej, > Overall great work, but I have to point out one thing - if you want us to > look into benchmarks, you have to provide a precise benchmark and a precise > way to run it (few examples would be awesome), otherwise for people who are > not aware of the language this might pose a challenge. At the moment, I'm not even sure I know what representative benchmarks might be - it's certainly something I'd like advice on! I did mention one simple benchmark in my message, which is "make regress" - it compiles the compiler, standard library, examples, and tests. This exercises most (not all, but most) of the infrastructure, so it's a pretty decent test (though it may not be entirely JIT friendly, as it's lots of small-ish tests; whether the JIT will warm up sufficiently is an open question in my mind). To run this test, I'd suggest something like: $ make clean ; cd vm ; make ; cd .. ; time make regress The reason for that order is that it 1) cleans everything (in order that it'll be built later) 2) compiles the VM on its own (we don't want to include that in the timings!) 3) executes "make regress" and times it (without including the VM build in the figures). If you're interested in creating new micro-benchmarks, the language manual is hopefully instructive: http://convergepl.org/documentation/1.2/quick_intro/ Let's say, for arguments sake, that you thought testing integer addition in a loop is a good idea. This program will do the trick: func main(): i := 0 while i < 100000: i += 1 You can then compile it: $ convergec -m f.cv and then execute it: $ time converge f That way, you know you're not including the time needed to compile the program. Yours, Laurie -- Personal http://tratt.net/laurie/ The Converge programming language http://convergepl.org/ https://github.com/ltratt http://twitter.com/laurencetratt From mmueller at python-academy.de Sat Dec 31 16:27:59 2011 From: mmueller at python-academy.de (=?ISO-8859-15?Q?Mike_M=FCller?=) Date: Sat, 31 Dec 2011 16:27:59 +0100 Subject: [pypy-dev] PyPy/NumPyPy sprint(s) in Leipzig, Germany? Message-ID: <4EFF29FF.7040009@python-academy.de> Hi, I am just wondering if anybody is interested in sprinting on PyPy and in particular NumPyPy in Leipzig sometime in 2012. I can offer working space for up to 12 people with Wi-Fi as well as some basic catering (hot and cold drinks, snacks, pizza etc.) for a few days (up to a week). Accommodation in Leipzig is very reasonably priced. For example, there is a decent hotel very close to the venue for 35 Euros/night (single) or 50 Euros/night (double) including breakfast. 
Being a logistics center, Leipzig is easy to travel to by car, train or airplane, including budget airlines. I would also act as co-sponsor (with the resources stated above) for an application for sprint funds (http://pythonsprints.com/cfa/ or other sources) that could cover (parts of) the traveling and accommodation expenses. Let me know what you think about it. I am open to ideas. Mike From arigo at tunes.org Sat Dec 31 17:45:35 2011 From: arigo at tunes.org (Armin Rigo) Date: Sat, 31 Dec 2011 17:45:35 +0100 Subject: [pypy-dev] How to turn a crawling caterpillar of a VM into a graceful butterfly In-Reply-To: <20111231152626.GA32360@phase.tratt.net> References: <20111231100304.GA22828@phase.tratt.net> <20111231152626.GA32360@phase.tratt.net> Message-ID: Hi Laurence, On Sat, Dec 31, 2011 at 16:26, Laurence Tratt wrote: > func main(): >    i := 0 >    while i < 100000: >      i += 1 A quick update: on this program, with 100 times the number of iterations, "converge-opt3" runs in 2.6 seconds on my laptop, and "converge-jit" runs in less than 0.7 seconds. That's already a 4x speed-up :-) I think that you simply underestimated the warm-up times. À bientôt, Armin. From laurie at tratt.net Sat Dec 31 17:58:55 2011 From: laurie at tratt.net (Laurence Tratt) Date: Sat, 31 Dec 2011 16:58:55 +0000 Subject: [pypy-dev] How to turn a crawling caterpillar of a VM into a graceful butterfly In-Reply-To: References: <20111231100304.GA22828@phase.tratt.net> <20111231152626.GA32360@phase.tratt.net> Message-ID: <20111231165855.GB32360@phase.tratt.net> On Sat, Dec 31, 2011 at 05:45:35PM +0100, Armin Rigo wrote: Hi Armin, >> func main(): >>    i := 0 >>    while i < 100000: >>      i += 1 > A quick update: on this program, with 100 times the number of iterations, > "converge-opt3" runs in 2.6 seconds on my laptop, and "converge-jit" runs > in less than 0.7 seconds. That's already a 4x speed-up :-) I think that > you simply underestimated the warm-up times. In fairness, I did know that that program benefits from the JIT at least somewhat :) I was wondering if there are other micro-benchmarks that the PyPy folk found particularly illuminating / surprising when optimising PyPy. There's also something else that's weird. Try "time make regress" with --opt=3 and --opt=jit. The latter is often twice as slow as the former. I have no useful intuitions as to why at the moment. Yours, Laurie -- Personal http://tratt.net/laurie/ The Converge programming language http://convergepl.org/ https://github.com/ltratt http://twitter.com/laurencetratt
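One generic way to see whether JIT warm-up is what separates --opt=3 from --opt=jit on a given benchmark is to time the workload in phases inside a single process run rather than timing the whole process once: under a JIT the early phases are slower and the later ones settle to the steady-state speed. Below is a small Python harness of that shape; the work() body is a stand-in, not a Converge benchmark.

-------------------------------
import time

def work(n):
    # Stand-in workload; replace with the real benchmark body.
    total = 0
    for i in xrange(n):
        total += i
    return total

def timed_phases(phases=10, n=1000000):
    # Printing each phase separately makes warm-up visible instead of letting
    # it hide inside one overall wall-clock number.
    for p in xrange(phases):
        start = time.time()
        work(n)
        print "phase %2d: %.4f seconds" % (p, time.time() - start)

if __name__ == "__main__":
    timed_phases()
-------------------------------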