From arigo at tunes.org Thu Dec 1 18:47:35 2005 From: arigo at tunes.org (Armin Rigo) Date: Thu, 1 Dec 2005 18:47:35 +0100 Subject: [pypy-dev] Full Python Annotation In-Reply-To: References: Message-ID: <20051201174735.GA28244@code1.codespeak.net> Hi Adam, On Fri, Nov 25, 2005 at 10:13:25AM -0700, Adam Olsen wrote: > Notes for description of value annotator I know that you expect some answer, but there is really not much that we can discuss about so far. The notes you present are interesting but they don't really touch the problems particular to Python. This is similar to existing approaches (Samuele posted some links on IRC), with aggressive specialization to get very precise results like integer ranges. I agree that, done right, this range analysis can prove that many usages of integers in a program cannot actually overflow; I've seen papers that got good results because user programs typically looked like: i = 0 while i < len(container): ... i += 1 The addition here cannot overflow, because i < len(container) <= sys.maxint so that i+1 <= sys.maxint. The problem, though, is that this kind of consideration is not what we are interested in for PyPy. A more interesting problem to consider would be the unusual Python object model, where the position in the class hierarchy of instance attributes is not declared. However, we already solved this one in our annotator. If you target "full Python" (whatever is really meant by that), there are many problems that we have mentioned to you already but that you don't seem to start worrying about (and no, "exec" alone is not a problem). The more fundamental problems lie in that, on the one hand, real-world programs build some of their own structure indirectly, and on the other hand you cannot be sure that nobody is playing tricks. There are just too many tricks in Python to ensure that any type approximation can be completely safe. You would have to resort to expected-case analysis and run-time guards or some kind of callbacks invalidating code when things go out of hand at run-time. None of that is impossible, but none of that is straightforward either -- more precisely, I think that this would easily swallow many "man-years" of work. While *still* not being what PyPy is about, discussing this kind of Python-oriented issues here wouldn't be completely off-topic :-) Instead of doing this, in PyPy we used the practical approach of RPython to develop PyPy itself in, and then we are about to work on the JIT for user code - "full Python" user code, as in "I type python xyz.py and I get the expected result". A bientot, Armin From arigo at tunes.org Fri Dec 2 00:39:01 2005 From: arigo at tunes.org (Armin Rigo) Date: Fri, 2 Dec 2005 00:39:01 +0100 Subject: [pypy-dev] Branch merged Message-ID: <20051201233901.GA3312@code1.codespeak.net> Hi, The somepbc-refactoring branch has been merged. Thanks to Michael for participation! What is still broken: * pypy/bin/translator.py -- the Translator class is going away, the functionality we need should probably be moved into this bin/translator.py for now. But ideally this should be replaced by an interface based on translator/goal/driver.py. * pdbplus, the pdb extension you get at the end of translate_pypy.py, has probably commands that need fixing. * translate_pypy --backend=llvm crashes apparently when building non-standalone targets; standalone targets appear to work fine (didn't try the whole of PyPy, however). All llvm tests work too, so it's hard to understand exactly what the problem is. 
* Christian: we did not completely port your r19917 because it needs some adaptations to the new world. The Translator class is being replaced by a much thinner TranslationContext. Most importantly, your changes to translator/c/pyobj.py haven't been merged -- they will also need adaptation to work on graphs instead of functions, as we did for the rest of pyobj.py. It was also difficult to know exactly what was needed because of the absence of tests. However, there is a tag of the trunk before we merged, so that a working pypy with your changes is still in http://codespeak.net/svn/pypy/tag/dist-ext-someobject . That's it ! A bientot, Armin & Samuele (& Michael, sadly not on-line when we merged -- or more probably happily so :-) From rxe at ukshells.co.uk Fri Dec 2 07:19:40 2005 From: rxe at ukshells.co.uk (Richard Emslie) Date: Fri, 2 Dec 2005 06:19:40 +0000 (GMT) Subject: [pypy-dev] Branch merged In-Reply-To: <20051201233901.GA3312@code1.codespeak.net> References: <20051201233901.GA3312@code1.codespeak.net> Message-ID: Hi Armin & Samuele! On Fri, 2 Dec 2005, Armin Rigo wrote: > > The somepbc-refactoring branch has been merged. Thanks to Michael for > participation! Cool! > > What is still broken: > > * translate_pypy --backend=llvm crashes apparently when building > non-standalone targets; standalone targets appear to work fine (didn't > try the whole of PyPy, however). All llvm tests work too, so it's > hard to understand exactly what the problem is. > Actually, I dont think any have ever worked - so no problem! :-) The interface for extensions is minimal to say the least. There is a todo on the llvm backend to remove the (somewhat legacy) pyrex interface wrapper and support a richer set of objects that can be passed to and from CPython. Thanks for doing a fix up of the llvm backend. Cheers, Richard From sanxiyn at gmail.com Fri Dec 2 17:40:46 2005 From: sanxiyn at gmail.com (Sanghyeon Seo) Date: Sat, 3 Dec 2005 01:40:46 +0900 Subject: [pypy-dev] Branch merged In-Reply-To: <20051201233901.GA3312@code1.codespeak.net> References: <20051201233901.GA3312@code1.codespeak.net> Message-ID: <5b0248170512020840k70c07ecfn@mail.gmail.com> 2005/12/2, Armin Rigo : > Hi, > > The somepbc-refactoring branch has been merged. Thanks to Michael for > participation! Great work! I think that it may be interesting to re-run the import analysis of PyPy. Michael? Seo Sanghyeon From arigo at tunes.org Fri Dec 2 18:28:49 2005 From: arigo at tunes.org (Armin Rigo) Date: Fri, 2 Dec 2005 18:28:49 +0100 Subject: [pypy-dev] Pascal backend In-Reply-To: <5b0248170511301040p4762a79bq@mail.gmail.com> References: <5b0248170511301040p4762a79bq@mail.gmail.com> Message-ID: <20051202172849.GA12932@code1.codespeak.net> Hi Seo! On Thu, Dec 01, 2005 at 03:40:58AM +0900, Sanghyeon Seo wrote: > I started Pascal backend for a while now. Have a look at: > http://sparcs.kaist.ac.kr/~tinuviel/pypy/ > > Resulting Pascal source can be compiled with FPC(Free Pascal Compiler), > available from http://www.freepascal.org/ > > What do you think? Also, if I understood correctly, translation part > is undergoing much change in the branch. What do I need to change? There is not too much to change. Mostly, back-ends must now work directly with graphs instead of manipulating Python function objects. I take it that you know that the current approach for back-ends is to use the output of the RTyper, not just the annotations? It's particularly useful for a back-end like Pascal. You would get flow graphs that contain "low-level" operations. 
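(As a toy illustration of what working directly with such low-level graphs could look like -- the SpaceOp class and emit_pascal function below are invented for this sketch and are not PyPy's actual graph model:)

    # A miniature, made-up model of one block of an RTyper-level flow graph.
    # Each operation is already typed and primitive (e.g. "int_add"), so a
    # backend only has to map operation names to target-language syntax.

    class SpaceOp(object):
        def __init__(self, opname, args, result):
            self.opname = opname          # e.g. "int_add", "int_mul"
            self.args = args              # names of the input variables
            self.result = result          # name of the result variable

    # A block computing: v2 := v0 + v1; v3 := v2 * v2
    block = [SpaceOp("int_add", ["v0", "v1"], "v2"),
             SpaceOp("int_mul", ["v2", "v2"], "v3")]

    PASCAL_BINOPS = {"int_add": "+", "int_mul": "*"}

    def emit_pascal(operations):
        """Render each low-level operation as one line of Pascal."""
        lines = []
        for op in operations:
            symbol = PASCAL_BINOPS[op.opname]
            lines.append("%s := %s %s %s;" % (op.result, op.args[0],
                                              symbol, op.args[1]))
        return "\n".join(lines)

    # emit_pascal(block) produces:
    #   v2 := v0 + v1;
    #   v3 := v2 * v2;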
For integer manipulation it is not such a big difference, but for any more complicated kind of objects it is essential because it allows you to care only about the "lltypes", i.e. structures (Pascal records), arrays and pointers. A bientot, Armin From mwh at python.net Sat Dec 3 13:27:31 2005 From: mwh at python.net (Michael Hudson) Date: Sat, 03 Dec 2005 12:27:31 +0000 Subject: [pypy-dev] This Week in PyPy 5 Message-ID: <2md5kebc24.fsf@starship.python.net> Introduction ============ This is the fifth of what will hopefully be many summaries of what's been going on in the world of PyPy in the last week. I'd still like to remind people that when something worth summarizing happens to recommend if for "This Week in PyPy" as mentioned on: http://codespeak.net/pypy/dist/pypy/doc/weekly/ where you can also find old summaries. I note in passing that the idea of keeping track of IRC conversations in the weekly summary has pretty much fizzled. Oh well. There were about 230 commits to the pypy section of codespeak's repository in the last week (a busy one, it seems :-). SomePBC-refactoring =================== We merged the branch at last! Finishing the branch off and getting translate_pypy running again seemed to mostly involve fighting with memoized functions and methods, and the "strange details" hinted at in the last "This Week in PyPy" were not so bad -- indeed once we got to the point of rtyping finishing, the backend optimizations, source generation, compilation and resulting binary all worked first time (there must be something to be said for this Test Driven Development stuff :). If you recall from the second This Week in PyPy the thing that motivated us to start the branch was wanting to support multiple independent object spaces in the translated binary. After three weeks of refactoring we hoped we'd made this possible... and so it proved, though a couple of small tweaks were needed to the PyPy source. The resulting binary is quite a lot (40%) bigger but only a little (10%) slower. CCC papers ========== As mentioned last week, two PyPy talks have been accepted for the Chaos Communication Congress. The CCC asks that speakers provide papers to accompany their talks (they make a proceedings book) so that's what we've done, and the results are two quite nice pieces of propaganda for the project: http://codespeak.net/pypy/extradoc/talk/22c3/agility.pdf http://codespeak.net/pypy/extradoc/talk/22c3/techpaper.pdf It's still possible to attend the conference in Berlin, from December 27th to the 30th: http://events.ccc.de/congress/2005 A number of PyPy people will be around and innocently mixing with people from other communities and generally be available for discussing all things PyPy and the future. Background EU-related work ========================== Less visible but still requiring work, organisations funding and organizing the EU PyPy project are currently preparing a lot of paperwork and reports. Most of the reports are mostly done by now but the next G?teborg sprint will start with two (insider only) days of dotting the 'i's and crossing the 't's. Let's all hope that everything goes well at our first major EU review at the end of January. Meanwhile, Holger was invited to give a talk about PyPy's technical organisation at a workshop given by the german EU office on the 5th of December. Also, Bea, Alastair and Holger will talk about PyPy at an EU workshop on the 8th of December in Bruxelles. 
Hopefully, this will enable us to find more opportunities to get PyPy recognized as an interesting "live" project in the EU's corner of the world. Where did PyPy-sync go? ======================= What's a pypy-sync meeting? Apparently:: It's an XP-style meeting that serves to synchronize development work and let everybody know who is working on what. It also serves as a decision board of the PyPy active developers. If discussions last too long and decisions cannot be reached they are delegated to a sub-group or get postponed. pypy-sync meetings usually happen on thursdays at 1pm CET on the #pypy-sync IRC channel on freenode, with an agenda prepared beforehand and minutes posted to pypy-dev after the meeting. Except that the last couple haven't really happened this way -- no agenda and only a few people have turned up and mostly just the people who are in #pypy all week anyway. So after the G?teborg sprint next week we're going to try harder to prepare and get developers to attend pypy-sync meetings again. This is especially important as we head towards more varied and less intrinsically related challenges such as a JIT compiler, integration of logic programming, GC, higher level backends and much more. -- Check out the comments in this source file that start with: # Oh, lord help us. -- Mark Hammond gets to play with the Outlook object model From tismer at stackless.com Sat Dec 3 15:33:54 2005 From: tismer at stackless.com (Christian Tismer) Date: Sat, 03 Dec 2005 15:33:54 +0100 Subject: [pypy-dev] RPython, PyPy, Stackless and all the rest (was: Branch merged) In-Reply-To: <20051201233901.GA3312@code1.codespeak.net> References: <20051201233901.GA3312@code1.codespeak.net> Message-ID: <4391ACD2.1010406@stackless.com> Armin Rigo wrote: ... > * Christian: we did not completely port your r19917 because it needs > some adaptations to the new world. The Translator class is being > replaced by a much thinner TranslationContext. Most importantly, your > changes to translator/c/pyobj.py haven't been merged -- they will also > need adaptation to work on graphs instead of functions, as we did for > the rest of pyobj.py. It was also difficult to know exactly what was > needed because of the absence of tests. However, there is a tag of > the trunk before we merged, so that a working pypy with your changes > is still in http://codespeak.net/svn/pypy/tag/dist-ext-someobject . That's fine with me. I didn't consider my patches more than a quick hack to get things going, and the major purpose was to hack in what was needed, just to convince the EWT people that PyPy is the way to go for their future (which worked very well). This part of code generation (concerning extensions and how to handle certain external objects) was kind of under-developed all the time. After my initial tests, I have a better picture what's needed, and I would like to discuss a couple of ways to create a nice interface for building extension modules, and also new builtin modules! This is because after a while of thinking, I figured out that this is the best way to support hard-to-port things like Stackless Python ATM: Instead of again patching all the newly grown extension modules to support some half-hearted Stackless support, I want to build a Stackless-PyPy hybrid implementation, based upon CPython 2.4, but with certain modules removed from the source and replaced by PyPy generated source code. This source code will be generated from RPython implementations, which are then both used in PyPy and in Stackless. 
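(An invented sketch of the kind of RPython-style module meant here -- explicit iterator classes and stable types instead of generators and dynamic tricks; this is not code from the PyPy or Stackless trees:)

    # Hypothetical itertools-style helper written in an RPython-friendly style.
    class Repeat(object):
        """Rough equivalent of itertools.repeat(obj, times)."""
        def __init__(self, obj, times):
            self.obj = obj
            self.times = times            # number of items still to deliver
        def __iter__(self):
            return self
        def next(self):                   # Python 2.4-era iterator protocol
            if self.times <= 0:
                raise StopIteration
            self.times -= 1
            return self.obj

    # Usable from plain CPython today:
    #   list(Repeat("x", 3))  ->  ["x", "x", "x"]
    # and, since it avoids generators and other dynamic features, the idea is
    # that the same source could later be fed to the translation toolchain.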
One example of these should be the itertools, another one the Stackless module itself (and of course the new deque objects, which I ported to RPy in the states,already). With this approach, I hope to make PyPy productive right now, before we have the advanced technologies, to simplify porting new features to Stackless, and to make more companies interested in using PyPy for their code. I have a sack full of small features which are needed for this. Nothing too hard, but enough for quite some work. I think it makes sense, because PyPy becomes productive right now, and we need that to have an argument for being used in production code. cheers -- chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Johannes-Niemeyer-Weg 9A : *Starship* http://starship.python.net/ 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ work +49 30 802 86 56 mobile +49 173 24 18 776 fax +49 30 80 90 57 05 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From Ben.Young at risk.sungard.com Wed Dec 7 11:03:35 2005 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Wed, 7 Dec 2005 10:03:35 +0000 Subject: [pypy-dev] Comments from an observer Message-ID: Dear PyPy'ers, First of all I would like to say that I think PyPy is an amazing project and that you have all done a really great job. Also the comments I have on the project are not aimed at any people in the project, more just at the general direction it appears to be going in. PyPy is on the edge of something great. A maintainable, powerful, flexible, fast interpreter is just what the python community needs. However just when it seems that PyPy can start to have some real significance in the Python world it seems like these benefits are being delayed for more research work which may take a long time. For instance a way of writing a rpython module that could be compiled to a Cpython extension or a PyPy extension would allow people to start using PyPy now, and at the same time make faster, powerful extensions for CPython while maintaining an upgrade path to PyPy. This would bring PyPy to the attention of a lot of people giving more testers/developers. Also, most people on #pypy seem to ask about using pypy to compile their simple python programs to c. Now, this doesn't seem like a great deal of work away (better error messages etc), but they are (politely) told that this is not what rpython is for. Now if rpython is not for this, why did you write PyPy in it? The same arguments could be applied to most programs (python is easier to read/maintain/write). I really can't see why something as useful as rpthon should remain an implementation detail, and again, exposing it would bring great exposure and benefits to the project. I don't want to come across like a moaner (and indeed, that's why I stop writing on #pypy as felt I couldn't be enough of a positive voice), and the only reason I'm writing this is because I think so much of the project and think it has so much potential. The last thing I want to see is for PyPy to become a great implemention with many powerful features, but then find that it had missed its time by not being "results driven" enough. The world doesn't need another powerful research/university language, it needs a great production language and with PyPy I think Python could be that language. Anyway, enough of my ranting. I'm sorry if I've offended anyone or completely missed the point. 
I'll go back to being a hopefull lurker again! Cheers, Ben From jacob at strakt.com Wed Dec 7 12:45:48 2005 From: jacob at strakt.com (Jacob =?iso-8859-1?q?Hall=E9n?=) Date: Wed, 7 Dec 2005 12:45:48 +0100 Subject: [pypy-dev] Comments from an observer In-Reply-To: References: Message-ID: <200512071245.49608.jacob@strakt.com> onsdagen den 7 december 2005 11.03 skrev Ben.Young at risk.sungard.com: > Dear PyPy'ers, > > First of all I would like to say that I think PyPy is an amazing project > and that you have all done a really great job. Also the comments I have on > the project are not aimed at any people in the project, more just at the > general direction it appears to be going in. > > PyPy is on the edge of something great. A maintainable, powerful, > flexible, fast interpreter is just what the python community needs. > However just when it seems that PyPy can start to have some real > significance in the Python world it seems like these benefits are being > delayed for more research work which may take a long time. > > For instance a way of writing a rpython module that could be compiled to a > Cpython extension or a PyPy extension would allow people to start using > PyPy now, and at the same time make faster, powerful extensions for > CPython while maintaining an upgrade path to PyPy. This would bring PyPy > to the attention of a lot of people giving more testers/developers. > > Also, most people on #pypy seem to ask about using pypy to compile their > simple python programs to c. Now, this doesn't seem like a great deal of > work away (better error messages etc), but they are (politely) told that > this is not what rpython is for. Now if rpython is not for this, why did > you write PyPy in it? The same arguments could be applied to most programs > (python is easier to read/maintain/write). I really can't see why > something as useful as rpthon should remain an implementation detail, and > again, exposing it would bring great exposure and benefits to the project. > > I don't want to come across like a moaner (and indeed, that's why I stop > writing on #pypy as felt I couldn't be enough of a positive voice), and > the only reason I'm writing this is because I think so much of the project > and think it has so much potential. The last thing I want to see is for > PyPy to become a great implemention with many powerful features, but then > find that it had missed its time by not being "results driven" enough. The > world doesn't need another powerful research/university language, it needs > a great production language and with PyPy I think Python could be that > language. > > Anyway, enough of my ranting. I'm sorry if I've offended anyone or > completely missed the point. I'll go back to being a hopefull lurker > again! Thanks for your input Ben, I think you are quite right in everything you say, and there are people among the Pypy developers who would be very interested in working on making RPython directly useable. However, we are to a fairly large extent deadline driven. The EU financing comes with a large set of promises for what we are going to do and a fairly strict timeline to go with it. Currently this timeline says that we are to work on core optimisations, stacklessness and JIT, with the work to be finished by May 2006. Some people are also to do support for aspect oriented programming and constraints satisfaction. After May, there are other things promised until the official end of the EU project in November 2006. 
You can see the EU financing as being a fully commercial customer driven project. The only difference is the the customer hardly ever changes his mind. This means that (almost) everyone currently working on the project is very busy and doesn't have time to delve into interesting paths. Fortunately, we are in contact with a party that is very interested in doing exactly what you propose to do, and may be ready to pay for getting it done. However, this would require people not currently doing Pypy development to do the work. I'm not at liberty to discuss this in detail. I would just like to mention it so that you can see that there may be a way forward. Even though the EU financing is a straight-jacket, we should remember that we would be nowhere near what we have today without it. Best regards Jacob Hall?n From Ben.Young at risk.sungard.com Wed Dec 7 12:51:50 2005 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Wed, 7 Dec 2005 11:51:50 +0000 Subject: [pypy-dev] Comments from an observer In-Reply-To: <200512071245.49608.jacob@strakt.com> Message-ID: Jacob Hall?n wrote on 07/12/2005 11:45:48: > onsdagen den 7 december 2005 11.03 skrev Ben.Young at risk.sungard.com: > > Dear PyPy'ers, > > > > First of all I would like to say that I think PyPy is an amazing project > > and that you have all done a really great job. Also the comments I have on > > the project are not aimed at any people in the project, more just at the > > general direction it appears to be going in. > > > > PyPy is on the edge of something great. A maintainable, powerful, > > flexible, fast interpreter is just what the python community needs. > > However just when it seems that PyPy can start to have some real > > significance in the Python world it seems like these benefits are being > > delayed for more research work which may take a long time. > > > > For instance a way of writing a rpython module that could be compiled to a > > Cpython extension or a PyPy extension would allow people to start using > > PyPy now, and at the same time make faster, powerful extensions for > > CPython while maintaining an upgrade path to PyPy. This would bring PyPy > > to the attention of a lot of people giving more testers/developers. > > > > Also, most people on #pypy seem to ask about using pypy to compile their > > simple python programs to c. Now, this doesn't seem like a great deal of > > work away (better error messages etc), but they are (politely) told that > > this is not what rpython is for. Now if rpython is not for this, why did > > you write PyPy in it? The same arguments could be applied to most programs > > (python is easier to read/maintain/write). I really can't see why > > something as useful as rpthon should remain an implementation detail, and > > again, exposing it would bring great exposure and benefits to the project. > > > > I don't want to come across like a moaner (and indeed, that's why I stop > > writing on #pypy as felt I couldn't be enough of a positive voice), and > > the only reason I'm writing this is because I think so much of the project > > and think it has so much potential. The last thing I want to see is for > > PyPy to become a great implemention with many powerful features, but then > > find that it had missed its time by not being "results driven" enough. The > > world doesn't need another powerful research/university language, it needs > > a great production language and with PyPy I think Python could be that > > language. > > > > Anyway, enough of my ranting. 
I'm sorry if I've offended anyone or > > completely missed the point. I'll go back to being a hopefull lurker > > again! > > Thanks for your input Ben, > > I think you are quite right in everything you say, and there are people among > the Pypy developers who would be very interested in working on making RPython > directly useable. However, we are to a fairly large extent deadline driven. > > The EU financing comes with a large set of promises for what we are going to > do and a fairly strict timeline to go with it. Currently this timeline says > that we are to work on core optimisations, stacklessness and JIT, with the > work to be finished by May 2006. Some people are also to do support for > aspect oriented programming and constraints satisfaction. After May, there > are other things promised until the official end of the EU project in > November 2006. You can see the EU financing as being a fully commercial > customer driven project. The only difference is the the customer hardly ever > changes his mind. > > This means that (almost) everyone currently working on the project is very > busy and doesn't have time to delve into interesting paths. > > Fortunately, we are in contact with a party that is very interested in doing > exactly what you propose to do, and may be ready to pay for getting it done. > However, this would require people not currently doing Pypy development to do > the work. I'm not at liberty to discuss this in detail. I would just like to > mention it so that you can see that there may be a way forward. > > Even though the EU financing is a straight-jacket, we should remember that we > would be nowhere near what we have today without it. > > Best regards > > Jacob Hall?n > Hi Jacob, Thanks for the reply. I understand completely about the EU thing. Both a massive benefit and a minor curse. Just wanted to put my frustrations down in words! I guess it comes from wanting to contribute but having no time to do it at all. Cheers, Ben From hpk at trillke.net Wed Dec 7 17:28:14 2005 From: hpk at trillke.net (holger krekel) Date: Wed, 7 Dec 2005 17:28:14 +0100 Subject: [pypy-dev] Comments from an observer In-Reply-To: References: Message-ID: <20051207162814.GE10165@solar.trillke.net> Hey Ben, just one additional note: we did say sometimes that we will do our best to help someone working on such a tool ... it's not too far off and actually quite some work has been spend on improving and refining the translation process. It just needs someone with dedication and some time to think and experiment a bit, tackling some minor issues and discussing/promoting larger issues. Moreover, the project is evolving in more directions than are covered by the EU funding and the EU only partially funds development anyway. The current group cannot follow all interesting paths at the same time - although it sometimes may appear so :) cheers, holger On Wed, Dec 07, 2005 at 10:03 +0000, Ben.Young at risk.sungard.com wrote: > > First of all I would like to say that I think PyPy is an amazing project > and that you have all done a really great job. Also the comments I have on > the project are not aimed at any people in the project, more just at the > general direction it appears to be going in. > > PyPy is on the edge of something great. A maintainable, powerful, > flexible, fast interpreter is just what the python community needs. 
> However just when it seems that PyPy can start to have some real > significance in the Python world it seems like these benefits are being > delayed for more research work which may take a long time. > > For instance a way of writing a rpython module that could be compiled to a > Cpython extension or a PyPy extension would allow people to start using > PyPy now, and at the same time make faster, powerful extensions for > CPython while maintaining an upgrade path to PyPy. This would bring PyPy > to the attention of a lot of people giving more testers/developers. > > Also, most people on #pypy seem to ask about using pypy to compile their > simple python programs to c. Now, this doesn't seem like a great deal of > work away (better error messages etc), but they are (politely) told that > this is not what rpython is for. Now if rpython is not for this, why did > you write PyPy in it? The same arguments could be applied to most programs > (python is easier to read/maintain/write). I really can't see why > something as useful as rpthon should remain an implementation detail, and > again, exposing it would bring great exposure and benefits to the project. > > I don't want to come across like a moaner (and indeed, that's why I stop > writing on #pypy as felt I couldn't be enough of a positive voice), and > the only reason I'm writing this is because I think so much of the project > and think it has so much potential. The last thing I want to see is for > PyPy to become a great implemention with many powerful features, but then > find that it had missed its time by not being "results driven" enough. The > world doesn't need another powerful research/university language, it needs > a great production language and with PyPy I think Python could be that > language. > > Anyway, enough of my ranting. I'm sorry if I've offended anyone or > completely missed the point. I'll go back to being a hopefull lurker > again! > > Cheers, > Ben > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From Ben.Young at risk.sungard.com Wed Dec 7 17:46:12 2005 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Wed, 7 Dec 2005 16:46:12 +0000 Subject: [pypy-dev] Comments from an observer In-Reply-To: <20051207162814.GE10165@solar.trillke.net> Message-ID: Hi Holger, I understand. I probably came across more pessimistic than I actually am. It's just very easy to get excited by a project like this, and see the endless possibilities (and not the endless hurdles)! Cheers, Ben hpk at trillke.net (holger krekel) wrote on 07/12/2005 16:28:14: > Hey Ben, > > just one additional note: we did say sometimes that we will do > our best to help someone working on such a tool ... it's not > too far off and actually quite some work has been spend on > improving and refining the translation process. It just needs > someone with dedication and some time to think and experiment > a bit, tackling some minor issues and discussing/promoting > larger issues. > > Moreover, the project is evolving in more directions > than are covered by the EU funding and the EU > only partially funds development anyway. 
The current > group cannot follow all interesting paths at the same > time - although it sometimes may appear so :) > > cheers, > > holger > > > On Wed, Dec 07, 2005 at 10:03 +0000, Ben.Young at risk.sungard.com wrote: > > > > First of all I would like to say that I think PyPy is an amazing project > > and that you have all done a really great job. Also the comments I have on > > the project are not aimed at any people in the project, more just at the > > general direction it appears to be going in. > > > > PyPy is on the edge of something great. A maintainable, powerful, > > flexible, fast interpreter is just what the python community needs. > > However just when it seems that PyPy can start to have some real > > significance in the Python world it seems like these benefits are being > > delayed for more research work which may take a long time. > > > > For instance a way of writing a rpython module that could be compiled to a > > Cpython extension or a PyPy extension would allow people to start using > > PyPy now, and at the same time make faster, powerful extensions for > > CPython while maintaining an upgrade path to PyPy. This would bring PyPy > > to the attention of a lot of people giving more testers/developers. > > > > Also, most people on #pypy seem to ask about using pypy to compile their > > simple python programs to c. Now, this doesn't seem like a great deal of > > work away (better error messages etc), but they are (politely) told that > > this is not what rpython is for. Now if rpython is not for this, why did > > you write PyPy in it? The same arguments could be applied to most programs > > (python is easier to read/maintain/write). I really can't see why > > something as useful as rpthon should remain an implementation detail, and > > again, exposing it would bring great exposure and benefits to the project. > > > > I don't want to come across like a moaner (and indeed, that's why I stop > > writing on #pypy as felt I couldn't be enough of a positive voice), and > > the only reason I'm writing this is because I think so much of the project > > and think it has so much potential. The last thing I want to see is for > > PyPy to become a great implemention with many powerful features, but then > > find that it had missed its time by not being "results driven" enough. The > > world doesn't need another powerful research/university language, it needs > > a great production language and with PyPy I think Python could be that > > language. > > > > Anyway, enough of my ranting. I'm sorry if I've offended anyone or > > completely missed the point. I'll go back to being a hopefull lurker > > again! > > > > Cheers, > > Ben > > _______________________________________________ > > pypy-dev at codespeak.net > > http://codespeak.net/mailman/listinfo/pypy-dev > > > From hpk at trillke.net Wed Dec 7 17:50:06 2005 From: hpk at trillke.net (holger krekel) Date: Wed, 7 Dec 2005 17:50:06 +0100 Subject: [pypy-dev] Comments from an observer In-Reply-To: References: <20051207162814.GE10165@solar.trillke.net> Message-ID: <20051207165006.GG10165@solar.trillke.net> Hi Ben, On Wed, Dec 07, 2005 at 16:46 +0000, Ben.Young at risk.sungard.com wrote: > I understand. I probably came across more pessimistic than I actually am. > It's just very easy to get excited by a project like this, and see the > endless possibilities (and not the endless hurdles)! hehe, indeed. i wasn't seeing your posting as pessimistic, btw. 
but maybe our little thread helps someone who is thinking of "how could i contribute something useful" :) holger From tismer at stackless.com Wed Dec 7 18:21:18 2005 From: tismer at stackless.com (Christian Tismer) Date: Wed, 07 Dec 2005 18:21:18 +0100 Subject: [pypy-dev] Comments from an observer In-Reply-To: References: Message-ID: <43971A0E.3060501@stackless.com> Ben.Young at risk.sungard.com wrote: > Hi Holger, > > I understand. I probably came across more pessimistic than I actually am. > It's just very easy to get excited by a project like this, and see the > endless possibilities (and not the endless hurdles)! Well, one hurdle is too few resources. I'd love to take that direction you proposed, partially, if I can find a PyPy core developer to help my company to fulfill its EU promises. Are you available? :-) -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Johannes-Niemeyer-Weg 9A : *Starship* http://starship.python.net/ 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ work +49 30 802 86 56 mobile +49 173 24 18 776 fax +49 30 80 90 57 05 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From Ben.Young at risk.sungard.com Thu Dec 8 10:35:23 2005 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Thu, 8 Dec 2005 09:35:23 +0000 Subject: [pypy-dev] Comments from an observer In-Reply-To: <43971A0E.3060501@stackless.com> Message-ID: pypy-dev-bounces at codespeak.net wrote on 07/12/2005 17:21:18: > Ben.Young at risk.sungard.com wrote: > > Hi Holger, > > > > I understand. I probably came across more pessimistic than I actually am. > > It's just very easy to get excited by a project like this, and see the > > endless possibilities (and not the endless hurdles)! > > Well, one hurdle is too few resources. I'd love to take that > direction you proposed, partially, if I can find a PyPy core > developer to help my company to fulfill its EU promises. > > Are you available? :-) > Believe me, there is nothing I would like more than to hack in and on python all day. Unfortunately I don't think my wife would like me doing all the travelling that seems to be involved. Plus, you'd have to beat a big financial software company on pay ;-) I really will try to find some time to help out. We've been having a very bad few weeks recently, but after Christmas things should quiten down and I may be able to help. I do still really want to come to a sprint. Maybe I can sell it to my company as some sort of training :) Cheers, Ben > -- > Christian Tismer :^) > tismerysoft GmbH : Have a break! Take a ride on Python's > Johannes-Niemeyer-Weg 9A : *Starship* http://starship.python.net/ > 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ > work +49 30 802 86 56 mobile +49 173 24 18 776 fax +49 30 80 90 57 05 > PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 > whom do you want to sponsor today? http://www.stackless.com/ > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > > From Ben.Young at risk.sungard.com Thu Dec 8 10:40:18 2005 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Thu, 8 Dec 2005 09:40:18 +0000 Subject: [pypy-dev] Ooops Message-ID: Sorry about sending that last one to the bounces list. 
Cheers, Ben From mwh at python.net Thu Dec 8 23:09:09 2005 From: mwh at python.net (Michael Hudson) Date: Thu, 08 Dec 2005 22:09:09 +0000 Subject: [pypy-dev] Re: Comments from an observer References: Message-ID: <2mirtz8cmy.fsf@starship.python.net> This reply is solely to make a couple of points that I don't think have been made yet -- I don't want to give the impression that those points were less important that the ones I mention here. Ben.Young at risk.sungard.com writes: > Also, most people on #pypy seem to ask about using pypy to compile their > simple python programs to c. Now, this doesn't seem like a great deal of > work away (better error messages etc), but they are (politely) told that > this is not what rpython is for. Now if rpython is not for this, why did > you write PyPy in it? Because we needed a description of the Python language that is amenable to analysis. I hope this isn't a new answer to you... > I don't want to come across like a moaner (and indeed, that's why I stop > writing on #pypy as felt I couldn't be enough of a positive voice), and > the only reason I'm writing this is because I think so much of the project > and think it has so much potential. The last thing I want to see is for > PyPy to become a great implemention with many powerful features, but then > find that it had missed its time by not being "results driven" enough. What results do you want? > The world doesn't need another powerful research/university > language, it needs a great production language and with PyPy I think > Python could be that language. Yes, but I want *Python* to be that language, with its multitude of existing libraries and useful dyanmism and all the rest. Have you read this blog post: http://dirtsimple.org/2005/10/children-of-lesser-python.html ? I think I agree with his point that supporting 80% of the language is of much less than 80% of the value. If you have new code to write, then fine, writing it in RPython isn't that bad. But it's the people who want to, e.g., use urllib2 or some old code they wrote last year that I personally am interested in helping, i.e. every single user Python has today. This is why I'm most interested in the JIT and the standard interpreter end of things, not productizing an RPython compiler. Now I'm not and wouldn't want to be speaking for the project as a whole, and I agree that productizing RPython would be a very worthwhile project -- but I'm not going to do it, sorry. I hope that this has at least convinced you that I have no intention of PyPy being a research/university language, either. Cheers, mwh -- I never disputed the Perl hacking skill of the Slashdot creators. My objections are to the editors' taste, the site's ugly visual design, and the Slashdot community's raging stupidity. -- http://www.cs.washington.edu/homes/klee/misc/slashdot.html#faq From seberino at spawar.navy.mil Fri Dec 9 02:33:26 2005 From: seberino at spawar.navy.mil (Christian Seberino) Date: Thu, 08 Dec 2005 17:33:26 -0800 Subject: [pypy-dev] How pypy & all Python implentations test for 'correctness'? Are there standard unit tests somewhere? Message-ID: <1134092006.5180.413.camel@localhost.localdomain> I'm curious how the various python implementations like JPython, IronPython, Pypy, ??? can VERIFY they are not deviating from 'correctness' (whatever that means). Is there a set of unit tests somewhere? Is that the best way to approach the problem? Chris -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 481 bytes Desc: This is a digitally signed message part URL: From cfbolz at gmx.de Fri Dec 9 11:02:15 2005 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Fri, 09 Dec 2005 11:02:15 +0100 Subject: [pypy-dev] How pypy & all Python implentations test for 'correctness'? Are there standard unit tests somewhere? In-Reply-To: <1134092006.5180.413.camel@localhost.localdomain> References: <1134092006.5180.413.camel@localhost.localdomain> Message-ID: <43995627.3050306@gmx.de> Hi Chris! Christian Seberino wrote: > I'm curious how the various python implementations like JPython, > IronPython, Pypy, ??? can VERIFY they are not deviating from > 'correctness' (whatever that means). You forgot CPython. There are some very obscure details where PyPy is actually more correct than CPython: in CPython you cannot subclass str or tuple while adding slots, for no good reason, while you can do that in PyPy. > Is there a set of unit tests somewhere? Yes, there is CPython's compliance-testsuite: http://codespeak.net/svn/pypy/dist/lib-python/2.4.1/test/ It has several drawbacks. One of the biggest is that it does not only test compliance but also some obscure implementation details. Example: The itertools tests contain a test that checks whether izip reuses tuples (which involves mutating them) if izip is the only holder of a reference to that tuple. Such a thing is only reasonable if you have a reference counting garbage collector (which is not part of the language specification). To counter this problem we had to modify several compliance tests to be more "pure". These modified tests can be found at: http://codespeak.net/svn/pypy/dist/lib-python/modified-2.4.1/test/ > Is that the best way to approach the problem? Depends on what you mean with "the problem". If you mean the problem of verifying language implementation correctness: given a good enough set of unit tests it would probably be a good approach. But the current tests are not really good enough. If you mean the problem of writing new Python implementations: The answer is "no", of course. The best way to write a new Python implementation in the future (remember that you are in a PyPy mailing list) is to write another backend for PyPy's translation toolchain :-) Cheers, Carl Friedrich From Ben.Young at risk.sungard.com Fri Dec 9 11:22:23 2005 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Fri, 9 Dec 2005 10:22:23 +0000 Subject: [pypy-dev] Re: Comments from an observer In-Reply-To: <2mirtz8cmy.fsf@starship.python.net> Message-ID: Hi Michael, pypy-dev-bounces at codespeak.net wrote on 08/12/2005 22:09:09: > This reply is solely to make a couple of points that I don't think > have been made yet -- I don't want to give the impression that those > points were less important that the ones I mention here. > > Ben.Young at risk.sungard.com writes: > > > Also, most people on #pypy seem to ask about using pypy to compile their > > simple python programs to c. Now, this doesn't seem like a great deal of > > work away (better error messages etc), but they are (politely) told that > > this is not what rpython is for. Now if rpython is not for this, why did > > you write PyPy in it? > > Because we needed a description of the Python language that is amenable > to analysis. I hope this isn't a new answer to you... I do understand that. It's just that as PyPy is a relatively complicated program it follows that rpython is good for making a lot of python programs amenable to analysis. 
(Yes as a by-product, but in my opinion an incredibly powerfull and usefull one) > > > I don't want to come across like a moaner (and indeed, that's why I stop > > writing on #pypy as felt I couldn't be enough of a positive voice), and > > the only reason I'm writing this is because I think so much of the project > > and think it has so much potential. The last thing I want to see is for > > PyPy to become a great implemention with many powerful features, but then > > find that it had missed its time by not being "results driven" enough. > > What results do you want? > Sorry, I guess "results driven" came across slightly differently from how I meant it. I guess I meant that PyPy has many parts that with a bit of polish could be useable now in production based scenarios, as people keep asking about in IRC. I.e if people could write extensions, and PyPy was itself a bit faster and more polished then people could start using it now, and upgrade to a JIT version/different backend/thread model etc later. > > The world doesn't need another powerful research/university > > language, it needs a great production language and with PyPy I think > > Python could be that language. > > Yes, but I want *Python* to be that language, with its multitude of > existing libraries and useful dyanmism and all the rest. Have you > read this blog post: > > http://dirtsimple.org/2005/10/children-of-lesser-python.html > > ? I think I agree with his point that supporting 80% of the language > is of much less than 80% of the value. > > If you have new code to write, then fine, writing it in RPython isn't > that bad. But it's the people who want to, e.g., use urllib2 or some > old code they wrote last year that I personally am interested in > helping, i.e. every single user Python has today. This is why I'm > most interested in the JIT and the standard interpreter end of things, > not productizing an RPython compiler. Now I'm not and wouldn't want > to be speaking for the project as a whole, and I agree that > productizing RPython would be a very worthwhile project -- but I'm not > going to do it, sorry. > > I hope that this has at least convinced you that I have no intention > of PyPy being a research/university language, either. > You have convinced me, and I'm glad that you are all so passionate about the project. Again, I didn't want to come across as moaning, and I want to thank you for all the work you have done so far. Cheers, Ben > Cheers, > mwh > > -- > I never disputed the Perl hacking skill of the Slashdot creators. > My objections are to the editors' taste, the site's ugly visual > design, and the Slashdot community's raging stupidity. > -- http://www.cs.washington.edu/homes/klee/misc/slashdot.html#faq > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > > From cfbolz at gmx.de Fri Dec 9 11:56:51 2005 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Fri, 09 Dec 2005 11:56:51 +0100 Subject: [pypy-dev] Re: Comments from an observer In-Reply-To: References: Message-ID: <439962F3.1070004@gmx.de> Hi Ben! Ben.Young at risk.sungard.com wrote: >> Because we needed a description of the Python language that is amenable >> to analysis. I hope this isn't a new answer to you... > > I do understand that. It's just that as PyPy is a relatively complicated > program it follows that rpython is good for making a lot of python > programs amenable to analysis. (Yes as a by-product, but in my opinion an > incredibly powerfull and usefull one) No. 
RPython is not good for making a lot of Python programs amenable to analysis, for several reasons. One of them is that although RPython is a rather nice and powerful language, it *is not Python* (although it deceptively looks like it). That means (as Michael already pointed out) that it is really hard to convert an existing program to RPython since the style of programming is just totally different. So all the people saying "yay, I want RPython to speed up my program" will run into deep problems, no matter how much the RPython toolchain will be brushed up. After all, the PyPy standard interpreter was written in RPython from the ground up. The other problem is that you have nearly no standard library available in RPython, which is also something that is not easy to change. And it is easy to forget just how much most Python programs are dependent on the stdlib :-). Cheers, Carl Friedrich Bolz
From tismer at stackless.com Fri Dec 9 13:56:46 2005 From: tismer at stackless.com (Christian Tismer) Date: Fri, 09 Dec 2005 13:56:46 +0100 Subject: [pypy-dev] Thoughts on Stackless support Message-ID: <43997F0E.6020201@stackless.com> Hi pypy-dev, Richard and I are prototyping stuff for Stackless support. While doing this (only testing stuff, still no start on a module), certain things struck us as worth considering: The existing support for yielding the current frame is very powerful, being something like a one-shot continuation. There is only a tiny wrapper necessary which updates this continuation on every shot, and we have the concept of coroutines in RPython. (Greenlets are almost the same) Instead of just going for Greenlets/Tasklets at application level, I think it makes very much sense to have coroutine support built into RPython all the time. We can implement all kinds of stuff with this; for example, generators and iterators would be very nice to implement without having to think about state. We just need something that yields, and let the rest be done by the builtin magic.
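(A small made-up illustration of the difference this would make, in plain Python rather than anything from the PyPy tree: the first version is the explicit state machine one has to write without yield, the second is what built-in coroutine/generator support would allow:)

    # Walking a chain of linked nodes; the Node class and names are invented.
    class Node(object):
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    class ChainIter(object):
        """Iterator that has to carry all of its state around explicitly."""
        def __init__(self, node):
            self.current = node
        def __iter__(self):
            return self
        def next(self):                   # Python 2.4-era iterator protocol
            if self.current is None:
                raise StopIteration
            value = self.current.value
            self.current = self.current.next
            return value

    def iter_chain(node):
        """The same thing written with yield: no state bookkeeping at all."""
        while node is not None:
            yield node.value
            node = node.next

    # chain = Node(1, Node(2, Node(3)))
    # list(ChainIter(chain)) == list(iter_chain(chain)) == [1, 2, 3]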
http://www.stackless.com/ From tismer at stackless.com Fri Dec 9 14:58:22 2005 From: tismer at stackless.com (Christian Tismer) Date: Fri, 09 Dec 2005 14:58:22 +0100 Subject: [pypy-dev] Re: Comments from an observer In-Reply-To: <439962F3.1070004@gmx.de> References: <439962F3.1070004@gmx.de> Message-ID: <43998D7E.2080106@stackless.com> Carl Friedrich Bolz wrote: > RPython is not good for making a lot of Python programs amenable for > analysis for several reasons. One of them is that although RPython is a > rather nice and powerful language is *is not Python* (although it > deceptively looks like it). That means (as Michael already pointed out) > that it is really hard to convert an existing program to RPython since > the style of programming is just totally different. It is true that itis hard to concert an existing program to RPython, but it is not about being a totally different coding style. The kind of program one wants to convert to RPython is typically already written in a style towards optimization.Of course, some optimizations for Python are counter-productive in RPython, and the solution is to remove them. You will see what I mean when I do my presentation, tomorrow. Most what I had to do was to avoid type ambiguity, and to avoid features which we don't support, yet. But finally, the programs don't look so different. Partially, they even do look better, because RPython lets us use constructs, which you cannot use in Python, due to current speed limitations. For instance, for accessing certain pieces of data (talking of a simple parser I'm going to show), Python programmers tend to write string slices explicitly, using constants and optimizing their expressions by hand. This kind of optimizations makes Python programs much less readable and reallyraises the questions why they don't better write an extension. With RPython, I don't need to do any of this. Instead, I can afford to write tiny classes to be wrapped around the data I want to analyze, creating a much nicer, more readable and configurable source. This is due to the fact that with constant evaluation and inlining, all these tiny instances are melting away and producing the same high quality code as you would get by explicitly writing slices. > So all the people > saying "yay, I want RPython to speed up my program" will run into deep > problems, no matter how much the RPython toolchain will be brushed up. > After all, the PyPy standard interpreter was written in RPython from the > ground up. I am addressing people who are already looking into writing extension modules. Giving them RPython as a tool to generate this extension module instead of hand-writing it is incredibly valuable. I do agree that this is not a cake walk, and I'm not going to try to make a tool that lets people do this with just any program, automatically. It makes sense to make RPython debugging easier and to get more help out of the tracebacks. Finally I'm not trying to compete with PyPy. Translating real Python is what PyPy is for. On the other hand, there is no point in forcing extension writers to continue their tedious work, after RPython exists. > The other problem is that you have nearly no standard library available > in RPython, which is also something that is not easy to change. And it > is easy to forget just how much most python programs are dependent on > the stdlib :-). Again this is not true. I can use all of the Python libraries from RPython,although I have to write some support code fir this. 
We will end up creating a new object space, which is able to give an abstraction of the existing CPython. There is quite some work to be done to make this work smoothly. For sure it does not make much sense to use RPython stand-alone. Instead, I'm aiming to build extension modules which can call back into any Python code, like CPython extensions are doing all the time. cheers - chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Johannes-Niemeyer-Weg 9A : *Starship* http://starship.python.net/ 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ work +49 30 802 86 56 mobile +49 173 24 18 776 fax +49 30 80 90 57 05 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/
From jacob at strakt.com Fri Dec 9 17:49:36 2005 From: jacob at strakt.com (Jacob Hallén) Date: Fri, 9 Dec 2005 17:49:36 +0100 Subject: [pypy-dev] Compliance test suite broken Message-ID: <200512091749.36116.jacob@strakt.com> Running the compliance test suite with a testresults directory installed results in the following traceback for (what I presume is) all the tests: Traceback (application-level): File "/home/jacob/src/pypy/dist-pypy/pypy/tool/pytest/regrverbose.py", line 3 in from test import test_support ImportError: cannot import name 'test_support' I'd be happy if someone who understands what broke can fix it, since I'd like to tackle some of the compliance problems. Jacob
From cfbolz at gmx.de Fri Dec 9 23:46:35 2005 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Fri, 09 Dec 2005 23:46:35 +0100 Subject: [pypy-dev] Gothenburg sprint report Message-ID: <439A094B.2080609@gmx.de> Hi PyPy-dev! Finally we have found some time to write a sprint report on what is day three or day five of the sprint depending on how you count. The reason for the uncertainty is that on Monday us lifers^Wfulltimers gathered for two days of EU-report writing ahead of our review meeting in January. We'll spare you the details, but as most of the technical reports are also part of the documentation on our website, it's worth mentioning that two documents: http://codespeak.net/pypy/dist/pypy/doc/low-level-encapsulation.html http://codespeak.net/pypy/dist/pypy/doc/translation-aspects.html received significant updates in these days. The former is a more friendly read; start with that one. Consensus opinion is that two days of proof-reading and generally attempting to write nice prose is even more exhausting than coding. It was thus with some relief that on Wednesday morning we met to plan our programming tasks. The task that most combined enormity and urgency was starting work on the Just-In-Time compiler. Samuele, Armin, Eric, Carl and Arre all worked on this. Eric and Arre implemented yet another interpreter for a very simple toy language, as we needed something that would be simpler to apply a JIT to than the PyPy behemoth, and then moved on to looking at pypy-c's performance. Carl, Armin and Samuele began writing a partial specializer for low-level graphs, which is to say a way to rewrite a graph in the knowledge that certain values are constants. This is related to the JIT effort because a JIT will partially specialize the interpreter main-loop for a particular code object. The combination of the two sub-tasks will allow us to experiment with ways of applying these specialization techniques.
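(A toy illustration of what "rewriting in the knowledge that certain values are constants" means -- this sketch specializes ordinary source text rather than low-level flow graphs, and the function names are invented:)

    def power(x, n):
        """The general version: n is only known at run time."""
        result = 1
        while n > 0:
            result = result * x
            n = n - 1
        return result

    def residual_power_source(n):
        """Specialize power() for a constant n by unrolling the loop that
        depends only on that constant, leaving straight-line code."""
        lines = ["def power_%d(x):" % n, "    result = 1"]
        for _ in range(n):
            lines.append("    result = result * x")
        lines.append("    return result")
        return "\n".join(lines)

    # residual_power_source(3) returns the residual program:
    #   def power_3(x):
    #       result = 1
    #       result = result * x
    #       result = result * x
    #       result = result * x
    #       return result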
Anders L and Nik repeatedly bashed their heads on the brick wall of issues surrounding a working socket module, and are making good progress although running a useful program is still a way off. The usual platform-dependent horrors are, predictably, horrific. Michael and the sole sprint newcomer this time, Johan, implemented support in the translation toolchain for C's "long long" type, which involved taking Johan on a whirlwind tour of the annotator, rtyper and C backend. By Thursday lunchtime they had a translated pypy that supported returning a long long value to app-level from exactly one external function (os.lseek) and declared success. Richard and Christian worked on exposing the raw stackless facilities that have existed for some time in a useful way. At this point they have written a 'tasklet' module that is usable by RPython, which is probably best considered as an experiment on ways to write a module that can expose stackless features to application level. Adrien and Ludovic worked on increasing the flexibility of our parser and compiler, in particular aiming to allow modification of the syntax at runtime. Their first success was to allow syntax to be *removed* from PyPy: a valuable tool for making web frameworks harder to write:: Clearly, the only way to cut down on the number of Web frameworks is to make it much harder to write them. If Guido were really going to help resolve this Web frameworks mess, he'd make Python a much uglier and less powerful language. I can't help but feel that that's the wrong way to go ;). (from Titus Brown's Advogato diary of 7 Dec 2005: http://www.advogato.org/person/titus/diary.html?start=134) But they have also now allowed the definition of rules that can modify the syntax tree on the fly. On Friday morning a task group reshuffle gave Michael a unique opportunity to work with Armin on the 'JIT' and Carl and Johan became a new '__del__' taskforce. By the end of the day, '__del__' was supported in the genc backend when using the reference counting garbage collector. This also involved changing details at all levels of the translation process, so by now Johan has seen a very little bit of very large amounts of PyPy... Arre and Eric (with some crucial hints from Richard) had a successful hunt for performance problems: they changed about 5 lines of code affecting the way we use the Boehm GC which resulted in a remarkable 30-40% speed up in pystone after translation. Now if they can change 10 lines to get an 80% improvement we _will_ be impressed :) That's all for now. We'll write a report on the last two days, err, sometime. :) Cheers, Michael & Carl Friedrich From mwh at python.net Sat Dec 10 00:11:33 2005 From: mwh at python.net (Michael Hudson) Date: Fri, 09 Dec 2005 23:11:33 +0000 Subject: [pypy-dev] This Week in PyPy 6 Message-ID: <2m7jad987u.fsf@starship.python.net> Introduction ============ This is the sixth of what will hopefully be many summaries of what's been going on in the world of PyPy in the last week. I'd still like to remind people, when something worth summarizing happens, to recommend it for "This Week in PyPy" as mentioned on: http://codespeak.net/pypy/dist/pypy/doc/weekly/ where you can also find old summaries. This week features the first IRC summary from Pieter Holtzhausen, a feature that will hopefully continue. There were about 150 commits to the pypy section of codespeak's repository in the last week (a relatively small number for a sprint week -- lots of thinking going on here). The Sprint!
=========== This is covered in more detail in the `sprint report`_, but seems to be going well. There has been work on the JIT, supporting larger integers and sockets in RPython, making the stackless option more useful, performance, compiler flexibility, documentation and probably even more. .. _`sprint report`: http://codespeak.net/pipermail/pypy-dev/2005q4/002656.html IRC Summary =========== Thanks again to Pieter for this. We need to talk about formatting :) **Friday** http://tismerysoft.de/pypy/irc-logs/pypy/%23pypy.log.20051202:: [00:04] Arigo states it is time to merge the PBC branch. Merging henceforth commences. [15:46] Pedronis and mwh discuss the simplification of the backend selection of the translator. Some translator planning documents checked in later. **Saturday** http://tismerysoft.de/pypy/irc-logs/pypy/%23pypy.log.20051203:: [15:45] Stakkars mentions the idea he posted to pypy-dev, that involves the substitution of CPython modules piecewise with pypy-generated modules. Pedronis replies that he has thought of a similar approach to integrate pypy and Jython, but that this effort needs to be balanced with the fact that the pypy JIT currently needs attention. **Sunday** http://tismerysoft.de/pypy/irc-logs/pypy/%23pypy.log.20051204:: [14:03] Stakkars asks about the necessity of 3 stacks in the l3interpreter that Armin has been working on. One for floats, ints and addresses. After remarks about easier CPU support, Arigo replies that there is simply no sane way to write RPython with a single one. [18:26] Gromit asks how ready pypy is for production usage. He is interested in pypy as a smalltalk-like environment, since he deems object spaces to be reminiscent of smalltalk vm images. [18:31] Stakkars states that he believes the project should postpone advanced technologies, in favour of getting the groundwork to a level where the project really becomes a CPython alternative. **Monday** http://tismerysoft.de/pypy/irc-logs/pypy/%23pypy.log.20051205:: [01:44] Pedronis running counting microbenchmarks, one 4.7 times slower than CPython, the other one 11.3 times. Function calling takes its toll in the latter. **Tuesday, Wednesday**:: [xx:xx] Sprint background radiation. Braintone rings like a bell. Not much to report. **Thursday** http://tismerysoft.de/pypy/irc-logs/pypy/%23pypy.log.20051208:: [17:55] Stakkars guesses that RPython may get basic coroutine support, and is excited about that. [18:05] Stakkars votes for having stackless enabled all the time. The advantages: - real garbage collection - iterator implementation without clumsy state machines [20:19] Rhamphoryncus wonders whether dynamic specialization (e.g. psyco) can possibly improve memory layout. [20:46] Sabi is glad that long long is now supported (courtesy of mwh and Johan). He yanks out his workaround. EU-related Talks ================ On Monday Holger spoke at a German EU office workshop in Bonn and two days later he, Alastair and Bea spoke at a more union-wide EU workshop in Brussels. Both talks were very well received and while ostensibly we were telling the EU about our project, we gained much immediately useful information about how the EU actually administers projects such as ours. -- If you don't use emacs, you're a pathetic, mewling, masochistic weakling and I can't be bothered to convert you.
-- Ron Echeverri From arigo at tunes.org Sat Dec 10 15:43:30 2005 From: arigo at tunes.org (Armin Rigo) Date: Sat, 10 Dec 2005 15:43:30 +0100 Subject: [pypy-dev] Compliance test suite broken In-Reply-To: <200512091749.36116.jacob@strakt.com> References: <200512091749.36116.jacob@strakt.com> Message-ID: <20051210144330.GA22487@code1.codespeak.net> Hi Jacob, On Fri, Dec 09, 2005 at 05:49:36PM +0100, Jacob Hallén wrote: > ImportError: cannot import name 'test_support' This is now fixed (thanks Holger). A bientot, Armin From guenter.jantzen at netcologne.de Sun Dec 11 15:40:52 2005 From: guenter.jantzen at netcologne.de (=?UTF-8?B?R8O8bnRlciBKYW50emVu?=) Date: Sun, 11 Dec 2005 15:40:52 +0100 Subject: [pypy-dev] Re: [pypy-svn] r21001 - pypy/extradoc/Chris EWT-travel In-Reply-To: <20051210135119.19D4227DC8@code1.codespeak.net> References: <20051210135119.19D4227DC8@code1.codespeak.net> Message-ID: <439C3A74.8090801@netcologne.de> Hello Christian, better this way! Everything you check in here is readable by every pypy-svn subscriber who works their way through the diffs. Apart from that: * Well, hello! * Many greetings from Cologne, and a pleasant Advent season. Günter tismer at codespeak.net wrote: > Author: tismer > Date: Sat Dec 10 14:51:18 2005 > New Revision: 21001 > > Removed: > pypy/extradoc/Chris EWT-travel/ > Log: > sorry > _______________________________________________ > pypy-svn mailing list > pypy-svn at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-svn > From mwh at python.net Sun Dec 11 21:36:21 2005 From: mwh at python.net (Michael Hudson) Date: Sun, 11 Dec 2005 21:36:21 +0100 Subject: [pypy-dev] gothenburg sprint report #2 Message-ID: <29F29F42-2CA4-4567-B4ED-925D4DAE3F40@python.net> So, the sprint is over, we are on a ferry but we *still* haven't escaped the internet... Christian and Richard spent the last couple of days thinking hard about the many possible ways of exposing stackless features to application-level and by the end of the sprint had pretty much considered them all. This means they will have no choice but to write some code for the mixed module soon :) Armin and his team of helpers (well, mostly Michael and Samuele to be honest) continued to work on JIT-related things, and continued to manufacture both working code and extremely strange bugs. Eventually the Test Driven Development style was halted for a quick but useful Cafe-cake-and-thinking-hard Driven Development moment (also attended by Carl). By the end of the sprint there was support in the abstract interpreter for "virtual structures" and "virtual arrays" which are the abstract interpreter's way of handling objects that are usually allocated on the memory heap but are sufficiently known ahead of time for actual allocation to be unnecessary. Now that probably didn't make much sense, so here's an example: def f(x): l = [x] return l[0] When CPython executes this code it allocates a list, places x into it, extracts it again and throws the list away. When the abstract interpreter sees the statement "l = [x]" it just records the information that "l is a list containing x" so when it sees "l[0]" it already knows what this is -- just "x". Then as nothing in the function ever needed l as a list, it just evaporates. Anders L and Nik continued on the socket module and managed to write a test that contained a simple server and client that could successfully talk to each other (after fighting some mysterious problems with processes that refused to die).
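(For the curious, such a test is roughly of the following shape -- this is a sketch in ordinary CPython using the standard socket module, not their actual RPython test, but it shows the kind of loopback server/client check involved:)

    import socket

    def test_echo():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(('127.0.0.1', 0))          # let the OS pick a free port
        server.listen(1)
        port = server.getsockname()[1]

        client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        client.connect(('127.0.0.1', port))
        conn, addr = server.accept()

        client.sendall('hello')
        assert conn.recv(5) == 'hello'

        conn.close()
        client.close()
        server.close()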
Carl continued his work on __del__, implementing support for it in the C backend when using the Boehm garbage collector which had the minor disadvantage of slowing pypy-c down by a factor of ten, as every instance of a user-defined class was registered with the GC as having a finalizer. This is apparently not something that the Boehm GC appreciates, so on Sunday Carl and Samuele implemented a different strategy. Now only instances of user-defined classes that actually define __del__ (at class definition time, no sneaky cls.__del__ = ... nonsense) get registered as needing finalization. Carl was also awarded his first "obscure Python bug" medal for making CPython dump core when he tried to test a hacky way to implement weakrefs (now SF bug #1377858). Arre and Eric continued their optimization drive and implemented a 'fastcall' path for both functions and methods which accelerates the common case of calling a Python function or method with the correct, fixed number of arguments. This improved the results of a simple function-call benchmark by a factor of about two. Result! A notable feature of this sprint is that Armin and Christian were implementing things very much like other things they had implemented before, painfully, in C -- namely stackless and psyco -- again, but this time in Python, and much more enjoyably :) Even more than this, they both managed to work their way through designs and ideas that had taken months to sort through the first time in days or even hours. We'd love to attribute all this to the magical qualities of Python but practice probably counts for something too :) So another sprint ends, and a productive one at that. As usual, we all need to sleep for a week, or at least until pypy-sync on Thursday... Cheers, mwh & Carl Friedrich From briandorsey at gmail.com Tue Dec 13 00:33:59 2005 From: briandorsey at gmail.com (Brian Dorsey) Date: Mon, 12 Dec 2005 15:33:59 -0800 Subject: [pypy-dev] gothenburg sprint report #2 In-Reply-To: <29F29F42-2CA4-4567-B4ED-925D4DAE3F40@python.net> References: <29F29F42-2CA4-4567-B4ED-925D4DAE3F40@python.net> Message-ID: <66e877b70512121533q6591f616u@mail.gmail.com> I just wanted quickly mention that I greatly appreciate these sprint reports and the weekly summaries! Take care, -Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at vanrietpaap.nl Wed Dec 14 12:21:56 2005 From: eric at vanrietpaap.nl (Eric van Riet Paap) Date: Wed, 14 Dec 2005 12:21:56 +0100 Subject: [pypy-dev] next pypy-sync meeting Message-ID: <23953399-6328-4AF0-A238-FEB99C805830@vanrietpaap.nl> Hello world of PyPy, It is my pleasure to invite you to the next pypy-sync meeting. Time & location: 1.00 - 1.30 pm (GMT+1) at #pypy-sync Regular Topics ==================== - activity reports (3 prepared lines of info) - resolve conflicts/blockers Topics of the week ================================= - pypy-sync meetings attendance - mallorca sprint topics - start of pycon-sprint planning / determine timing (mwh&hpk volunteered to care) - Logic Programming / Aspect Oriented stuff: status and how does it get more visible to the group? Please let me know if there is anything else you feel we should discuss. 
Cheers, Eric van Riet Paap From mwh at python.net Wed Dec 14 16:47:23 2005 From: mwh at python.net (Michael Hudson) Date: Wed, 14 Dec 2005 15:47:23 +0000 Subject: [pypy-dev] Re: next pypy-sync meeting References: <23953399-6328-4AF0-A238-FEB99C805830@vanrietpaap.nl> Message-ID: <2my82n65pw.fsf@starship.python.net> Eric van Riet Paap writes: > Hello world of PyPy, > > It is my pleasure to invite you to the next pypy-sync meeting. > > Time & location: 1.00 - 1.30 pm (GMT+1) at #pypy-sync I do not expect to be able to attend this (as mentioned on the sprint). I'll be on a train (although if wifi at schiphol is cheap enough I may catch the end of the meeting). > Regular Topics > ==================== > > - activity reports (3 prepared lines of info) LAST WEEK: sprint, recover, small things NEXT WEEK: jit stuff BLOCKERS: coordination/keeping up > - pypy-sync meetings attendance I should be at the next one :) > - mallorca sprint topics > - start of pycon-sprint planning / determine timing (mwh&hpk > volunteered to care) Did I? Cool! :) I guess this mostly means talking to the PyCon organizers at this stage... > Please let me know if there is anything else you feel we should discuss. As usual, I invite suggestions for this week's "This Week in PyPy", though these hardly have to be during the meeting itself. Cheers, mwh -- C is not clean -- the language has _many_ gotchas and traps, and although its semantics are _simple_ in some sense, it is not any cleaner than the assembly-language design it is based on. -- Erik Naggum, comp.lang.lisp From arigo at tunes.org Wed Dec 14 21:01:19 2005 From: arigo at tunes.org (Armin Rigo) Date: Wed, 14 Dec 2005 21:01:19 +0100 Subject: [pypy-dev] LLAbstractInterpreter Message-ID: <20051214200119.GA8695@code1.codespeak.net> Hi all, As you know, the JIT work started during the sprint. Most of the work went into pypy/jit/llabstractinterpreter.py. As it's still an obscure piece of code, I will try to describe it a bit in this e-mail in a documentation-ish way. Sorry, it's very draftish. -~-~- The LLAbstractInterpreter is a tool that takes low-level flow graphs and converts them into some more low-level flow graphs, by propagating constants and "virtualizing" structures (more about it later). Consider the example of pypy/jit/tl.py. It is a small interpreter for the "Toy Language", a simple stack machine. Here is what the main loop of the interpreter looks like: def interp(code): # 'code' is a bytecode string pc = 0 # position counter code_len = len(code) stack = [] while pc < code_len: opcode = ord(code[pc]) pc += 1 if opcode == PUSH: stack.append( ... ) pc += 1 elif opcode == POP: stack.pop() elif ... I let you imagine what the low-level flow graph of this function looks like: an 'int_lt' for the loop condition, a 'getarrayitem' followed by a 'cast_char_to_int' for reading the opcode, an 'int_add' for incrementing the pc, and a lot of comparisons and branches for the long if/elif/... part. The LLAbstractInterpreter takes this graph as input and follows it operation by operation. While doing so, it regenerates a new graph, similar to the input one, by putting into it a copy of the operations it encounters (similar to the flow object space). For some of the input graph's variables the LLAbstractInterpreter propagates constant values; it performs the operations between such constants directly, instead of putting them into the newly generated graph (again, this is similar to the flow object space). ~-~- Consider the Toy Language example again.
We use the LLAbstractInterpreter on the low-level graph of the interp() function above, providing it with a constant Toy Language code string. The LLAbstractInterpreter must be written in a completely general way -- it should not depend on the details of the interp() function, but just use it as input. (It is allowed to give a few hints to guide the process, though.) The goal is to obtain the following effect: 'code', 'code_len' and 'pc' should all be constants, and operations on them should be constant-folded. On the first iteration through the loop, 'opcode' will thus be a constant -- the value of the first opcode. All the if/elif/... branching can also be constant-folded, because the 'opcode' is known. All that's left to be written in the new graph is the actual content of the selected 'if', i.e. the actual bytecode implementation. Then 'pc+1' is a constant too, so on the second iteration through the loop, 'opcode' will again be a constant, and the if/elif/... branching will melt again -- with only the implementation of the second opcode left in the new graph. And so on, iteration after iteration. The loop is unrolled, in the sense that it's not present in the new graph any more, but each unrolled version of the loop is much smaller than the original code because all the dispatching logic disappeared and all the opcode implementations but one are ignored. The delicate part -- the difference with the flow space -- is that to obtain this effect we need more precise control over what occurs when the input graph loops back over itself, or generally when two control flow paths are merged into the same target block. Consider what occurs after one iteration through the 'while' loop in interp(). The value of 'pc' is different: it was 0 at the start of the first iteration, and it is 1 at the start of the second iteration. In this case, the intended goal is to unroll the loop, i.e. follow the same loop body again with a different constant for 'pc'. However, we don't want to unroll *all* loops of the input graph. There are cases where we need to ignore the different constants we get, and generalize them to a variable, so that the graph we generate can contain a loop similar to the original one. For example, if the Toy Language had a FACTORIAL bytecode, it would probably be written as: ...elif opcode == FACTORIAL: n = stack.pop() result = 1 i = 2 while i <= n: result *= i i += 1 stack.append(result) Here, 'i' is 2 at the start of the first iteration, 3 at the start of the next one, then 4, and so on. But still, unrolling this loop makes no sense -- particularly if 'n' is unknown! So there is some delicate adjusting to do. For the short-term future, this will require hints in the input low-level flow graphs (for example, by putting calls to a dummy hint() function in the source code of interp()). What we have so far is two kinds of constants: "concrete" constants and so-called "little" constants. Only the "concrete" constants force duplication of blocks of code; if two different "little" constants reach the same input block, they are merged into a variable. There are heuristics like "when an operation is constant-folded, it produces a concrete constant if at least one of the input arguments was already a concrete constant, and a little constant otherwise". The problem with that approach is that it doesn't really work :-) We can't seem to get the correct effect already in the case of the Toy Language. 
I will propose a plan for this in the next e-mail, to avoid mixing speculations with the presentation of what we have so far. ~-~ There are two more points to discuss about the current LLAbstractInterpreter. A bit of terminology first. The generated graph is called the "residual" graph, containing "residual operations" -- operations left over by the process. We call "LLAbstractValue" what the LLAbstractInterpreter propagates. Concrete and little constants are LLAbstractValues; so are the residual variables that will be put in the residual graph. A "virtual structure" is another type of LLAbstractValue. When a 'malloc' operation is seen, we propagate for its result a "virtual structure" instead of a variable holding the pointer. The virtual structure describes the content of all the fields. It is updated when we see 'setfield' operations, and read from when we see 'getfield' operations. Each field is described by a further LLAbstractValue. The goal is similar to (but more powerful than) the malloc removal back-end optimization. Malloc removal can already cope with code such as: def f(n): x = X() x.n = 42 return x.n The 'x' instance does not have to be allocated at all in the residual graph; we can just return 42. But this optimization is particularly interesting in the LLAbstractInterpreter because the loop unrolling and general constant propagation tends to bring together pieces of code that were originally distant. This is what occurs in interp() with the 'stack'. In its original source code, we have no choice but to use a memory-allocated list to propagate the values from one bytecode to the next. However, after constant propagation (say, for 'code=[PUSH N, PUSH M, ADD]'), the residual code looks like: stack = [] stack.append(N) stack.append(M) stack.append(stack.pop()+stack.pop()) return stack.pop() Of course we would like this to become: return N+M We could use malloc removal on the result, but doing it directly in the LLAbstractInterpreter is more "natural" (and more powerful). All we have to do is propagate LLAbstractValues for the stack that describe its current state: stack = [] # stack: LLVirtualArray([]) stack.append(N) # stack: LLVirtualArray([LLVar(N)]) stack.append(M) # stack: LLVirtualArray([LLVar(N), LLVar(M)]) y = stack.pop() # stack: LLVirtualArray([LLVar(N)]) y: LLVar(M) x = stack.pop() # stack: LLVirtualArray([]) x: LLVar(N) y: LLVar(M) z = x + y # residual operation 'Z=int_add(N,M)' # stack: LLVirtualArray([]) z: LLVar(Z) stack.append(z) # stack: LLVirtualArray([LLVar(Z)]) res = stack.pop() # stack: LLVirtualArray([]) res: LLVar(Z) return res # 'return Z' Note that this example is not really accurate. In fact, both normal constant propagation and virtualization occur at the same time, so that the code in the left column is not actually generated at all in the case of the Toy Language interpreter: the real left column we would see is the whole interp() function, with its loop unrolled. The states in the right column contain more constants, like the 'pc' and the 'opcode'. So during each iteration through the main 'while' loop, we get a new state that differs from the previous one in more than one way: not only do the 'pc' and 'opcode' constants have different values, but the shape and content of the 'stack' virtual array is different. The point of doing both at the same time is that the virtual structure or array could contain constants that are later read out and used for more constant propagation.
We cannot do that with a separate post-processing malloc removal phase. (Also note that RPython lists become actually GcStructs containing a GcArray in the low-level graphs, so the 'stack' is actually a LLVirtualStruct whose 'items' field is itself described as a LLVirtualArray.) -~-~ Phew! A short note about the last noteworthy point of the LLAbstractInterpreter: function calls. Most function calls are actually inlined. The reason is that LLVirtualStructs and LLVirtualArrays are difficult to track across function calls and returns. Indeed, in code like that: def f(): lst = [1] do_something(lst) return lst[0] it could be the case that do_something() actually modifies the list. Propagating information about what is in the list is very difficult across a call. To solve this problem, we just follow the call and continue to look at the operations in do_something(), while still generating code in the same residual graph. When do_something() returns, we come back to the calling point in the input graph, and we continue to - still - generate residual operations in the same, unique residual graph. A call that only passes variables is not inlined nor further analysed, because there would be no point - no constant to propagate. In this case, we just write a residual call operation to the original function. For example, if the hypothetical factorial bytecode were implemented as: ...elif opcode == FACTORIAL: n = stack.pop() stack.append(factorial(n)) Then the residual graph would just contain a direct_call to the unmodified graph of the factorial() function. This is how the LLAbstractInterpreter can hope to cope with very large programs like PyPy: starting from the interpreter main loop, at some point, it just stops following calls because there is nothing more to propagate. The residual graph will just contain calls to the original input graphs (i.e. to "the rest" of PyPy). ~-~-~ A bientot, Armin From arigo at tunes.org Thu Dec 15 09:04:40 2005 From: arigo at tunes.org (Armin Rigo) Date: Thu, 15 Dec 2005 09:04:40 +0100 Subject: [pypy-dev] Re: [pypy-svn] r21163 - pypy/dist/pypy/translator/backendopt In-Reply-To: <20051214224023.1829027B5C@code1.codespeak.net> References: <20051214224023.1829027B5C@code1.codespeak.net> Message-ID: <20051215080440.GA21590@code1.codespeak.net> Hi Carl, On Wed, Dec 14, 2005 at 11:40:23PM +0100, cfbolz at codespeak.net wrote: > Modified: > pypy/dist/pypy/translator/backendopt/inline.py > Log: > make functions that are called exactly once more likely to get inlined. I'm concerned about functions that look like: def _ll_list_resize_ge(l, newsize): if len(l.items) >= newsize: l.length = newsize else: _ll_list_resize_really(l, newsize) It's a stub that we would like to see inlined in its many callers, but the _ll_list_resize_really() should not be inlined. With your new weighting formula, it's likely that _ll_list_resize_really() would get inlined into _ll_list_resize_ge() first, and then the intended effect is lost. (The example is not good because _ll_list_resize_really() is actually called from two other places as well... it's just an example). Maybe it would be better to do two independent passes in auto_inlining(): with the old formula, and then once again -- recomputing the callers/callees as well -- with the modified formula favoring functions that are *still* called only once. 
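(Schematically, the two-pass schedule could look something like the following -- the names here are invented for illustration and do not match the real auto_inlining() interface:)

    def two_pass_inlining(translator, threshold):
        # pass 1: the old weighting formula, no bonus for unique callers
        run_inlining_pass(translator, threshold, favor_unique_callers=False)
        # pass 2: recompute callers/callees on the graphs left over by
        # pass 1, and only now favor functions still called from one place
        run_inlining_pass(translator, threshold, favor_unique_callers=True)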
A bientot, Armin From cfbolz at gmx.de Thu Dec 15 10:35:00 2005 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Thu, 15 Dec 2005 10:35:00 +0100 Subject: [pypy-dev] Re: [pypy-svn] r21163 - pypy/dist/pypy/translator/backendopt In-Reply-To: <20051215080440.GA21590@code1.codespeak.net> References: <20051214224023.1829027B5C@code1.codespeak.net> <20051215080440.GA21590@code1.codespeak.net> Message-ID: <43A138C4.8080603@gmx.de> Hi Armin! Armin Rigo wrote: > On Wed, Dec 14, 2005 at 11:40:23PM +0100, cfbolz at codespeak.net wrote: > >>Modified: >> pypy/dist/pypy/translator/backendopt/inline.py >>Log: >>make functions that are called exactly once more likely to get inlined. > > > I'm concerned about functions that look like: > > def _ll_list_resize_ge(l, newsize): > if len(l.items) >= newsize: > l.length = newsize > else: > _ll_list_resize_really(l, newsize) > > It's a stub that we would like to see inlined in its many callers, but > the _ll_list_resize_really() should not be inlined. With your new > weighting formula, it's likely that _ll_list_resize_really() would get > inlined into _ll_list_resize_ge() first, and then the intended effect is > lost. (The example is not good because _ll_list_resize_really() is > actually called from two other places as well... it's just an example). > > Maybe it would be better to do two independent passes in > auto_inlining(): with the old formula, and then once again -- > recomputing the callers/callees as well -- with the modified formula > favoring functions that are *still* called only once. I agree. I did this checkin mostly because I wanted the new inlining tested on snake, since my own machine gave incoherent results. I should have made that more clear. On the other hand, with the new inlining we seem to get a slight speedup (from 9.22x to 8.31x slower than CPython on pystone, from 8.15x to 8.16x slower than CPython with Richards), so maybe we can get even more when doing this with your proposed method. Cheers, Carl Friedrich From arigo at tunes.org Thu Dec 15 15:37:30 2005 From: arigo at tunes.org (Armin Rigo) Date: Thu, 15 Dec 2005 15:37:30 +0100 Subject: [pypy-dev] LLAbstractInterpreter In-Reply-To: <20051214200119.GA8695@code1.codespeak.net> References: <20051214200119.GA8695@code1.codespeak.net> Message-ID: <20051215143730.GA29298@code1.codespeak.net> Hi again, On Wed, Dec 14, 2005 at 09:01:19PM +0100, Armin Rigo wrote: > There are heuristics like "when an operation is constant-folded, it > produces a concrete constant if at least one of the input arguments was > already a concrete constant, and a little constant otherwise". The > problem with that approach is that it doesn't really work :-) We can't > seem to get the correct effect already in the case of the Toy Language. > I will propose a plan for this in the next e-mail, to avoid mixing > speculations with the presentation of what we have so far. Of course it's unclear at the moment how well different solutions would work, but here is one proposed approach... The crucial place in the interp() function where we need to have a constant is the 'opcode' variable, used in the chain of if/elif. So let's write it down explicitely as a hint: def interp(code): pc = 0 code_len = len(code) stack = [] while pc < code_len: opcode = ord(code[pc]) pc += 1 opcode = hint(opcode, concrete=True) # hint! if opcode == PUSH: stack.append( ... ) pc += 1 elif opcode == POP: stack.pop() elif ... 
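(As an aside, and this is only an assumption about how it could be defined rather than a decided interface: hint() itself can presumably be a plain identity function, so that interp() keeps running unchanged on top of CPython; only the translation tools would give the keyword arguments a meaning:)

    def hint(x, **kwds):
        # at run-time this is a no-op; keywords like concrete=True are
        # only interpreted by the LLAbstractInterpreter
        return x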
The new function hint() would turn into an operation in the LL flow graph, e.g.: v2 = hint(v1, Constant(dict_of_keywords)) It is mostly the same as the 'same_as' operation (:-), but it is recognized by the LLAbstractInterpreter as meaning that we want to force 'v2' to be a constant. Here are the proposed propagation rules. For now, let's consider we have LLVariables and LLConstants -- only one kind of constant, of the "concrete" kind, i.e. forcing loop unrolling. (There is no implicit merging with generalization of constants to variables in this model.) By default, mostly everything is LLVariables, just like they are Variables in the input flow graph. In the code sample above, this includes 'pc' (but not 'code', for which we can pass an LLConstant argument). Then the 'opcode' that reaches the call to hint() is also a LLVariable. That's a problem: how to turn a variable into a concrete constant? The idea is to give up, but track back the origin of the variable. Let's look at (part of) the LL graph of interp(): (block 1) | | (pc=0) V ,--> (block 2) | v1 = int_lt(pc, code_len) | exitswitch v1 | | . | True . V (block 3) v2 = getarrayitem(code, pc) v3 = cast_char_to_int(v2) v4 = hint(v3, ({'concrete': True})) If v3 receives a LLVariable, then we track it back: it comes from an operation involving 'v2', which itself involves 'code' and 'pc'. Here 'code' is a LLConstant, but 'pc' comes from the previous block. In the path we followed entering this block, 'pc' comes from a constant in the flow graph. So far, the state at the start of block 2 looks like this: code: LLConstant("...") code_len: LLConstant(len(code)) pc: LLVariable() So the hint() operation would re-link the residual block corresponding to "block 1" to a new residual "block 2" with a more precise state: code: LLConstant("...") code_len: LLConstant(len(code)) pc: LLConstant(0) And then we continue again from this new state, which should result in 'v3' being an LLConstant this time. The complete story probably still requires two kinds of constants: hint() would return a "concrete" constant, but the 'pc' in the state would only become a "little" constant. I think that the original way the LLAbstractInterpreter propagates them (before more rules were added) would be just what we need here: * a "concrete" constant cannot be merged; * constant-folded operations produce a "concrete" constant if any input argument is a "concrete" constant; * "little" constants become variables as soon as they go to the next block, so we don't have to worry about generalizing two little constants at a merge point. The hint() operation would hack the state of block 2 to make 'pc' a *little* constant, so that it stays a constant for the minimal amount of time and doesn't propagate everywhere. So after 'pc' has been forced to a little constant in block 2, it would still revert to a variable in block 3; but then the hint() would fail again, and this time it would fix the state of block 3 so that 'pc' is a little constant in there as well. (We can think later about making this tedious step-by-step constantification not too slow, if needed.) For reference, an implicit goal I'm trying to address is to avoid having constants propagate more than expected. Currently, we get the bad effect of TL instructions like "PUSH 42" storing a concrete constant in the value stack, which prevents merges. It is a concrete constant at the moment because the value '42' is just the ord() of the next character in the bytecode. 
Under the new proposal, neither 'code' nor 'pc' would be concrete constants, so the 'ord(code[pc])' that retrieve 42 would not return a concrete constant either. Only 'opcode' would be a concrete constant, plus the result of the computations done directly with this value -- which is just the results of the comparisons in the if/elif statements. I expect some amount of propagation of the little constants will soon be needed again, with its automatic generalization logic, but that should be reintroduced in a second step. -#-#- A bientot, Armin From njriley at uiuc.edu Fri Dec 16 03:38:25 2005 From: njriley at uiuc.edu (Nicholas Riley) Date: Thu, 15 Dec 2005 20:38:25 -0600 Subject: [pypy-dev] Patch: TSC support Message-ID: <20051216023825.GA14107@uiuc.edu> The attached patch provides an equivalent of CPython's --with-tsc for pypy-c on x86, PowerPC and Alpha, for Linux and Mac OS X (where available). You enable it with the --tsc flag and use sys.settscdump just as in CPython. This was mostly "my first PyPy project", written as a learning experience; I'm posting it in case it would be useful for others. -- Nicholas Riley | -------------- next part -------------- diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/interpreter/executioncontext.py pypy-njr/pypy/interpreter/executioncontext.py --- pypy-dist/pypy/interpreter/executioncontext.py Tue Dec 6 15:37:07 2005 +++ pypy-njr/pypy/interpreter/executioncontext.py Thu Dec 15 17:25:35 2005 @@ -1,4 +1,4 @@ -import sys +import os, sys from pypy.interpreter.miscutils import Stack from pypy.interpreter.error import OperationError @@ -15,6 +15,9 @@ self.ticker = 0 self.compiler = space.createcompiler() + self.tscdump = False + self.ticked = False + def enter(self, frame): if self.framestack.depth() > self.space.sys.recursionlimit: raise OperationError(self.space.w_RuntimeError, @@ -59,11 +62,21 @@ def bytecode_trace(self, frame): "Trace function called before each bytecode." 
+ if self.tscdump: + from pypy.rpython.rtsc import read_diff, reset_diff + code = getattr(frame, 'pycode') + opcode = code is None and 0 or ord(code.co_code[frame.next_instr]) + os.write(2, 'opcode=%d t=%d inst=%d\n' % + (opcode, int(self.ticked), read_diff())) + self.ticked = False + reset_diff() + # First, call yield_thread() before each Nth bytecode, # as selected by sys.setcheckinterval() ticker = self.ticker if ticker <= 0: self.space.threadlocals.yield_thread() + self.ticked = True ticker = self.space.sys.checkinterval self.ticker = ticker - 1 @@ -142,6 +155,13 @@ else: self.w_profilefunc = w_func + def settscdump(self, w_bool): + from pypy.rpython.objectmodel import we_are_translated + if not we_are_translated(): + raise OperationError(self.space.w_NotImplementedError, + self.space.wrap("No access to timestamp counter in untranslated PyPy")) + self.tscdump = self.space.is_true(w_bool) + def call_tracing(self, w_func, w_args): is_tracing = self.is_tracing self.is_tracing = 0 diff -uNr -x '*~' -x '*.pyc' -x _cache pypy-dist/pypy/module/sys/__init__.py pypy-njr/pypy/module/sys/__init__.py --- pypy-dist/pypy/module/sys/__init__.py Tue Dec 6 15:37:15 2005 +++ pypy-njr/pypy/module/sys/__init__.py Mon Dec 12 00:56:39 2005 @@ -45,6 +45,7 @@ 'settrace' : 'vm.settrace', 'setprofile' : 'vm.setprofile', 'call_tracing' : 'vm.call_tracing', + 'settscdump' : 'vm.settscdump', 'executable' : 'space.wrap("py.py")', 'copyright' : 'space.wrap("MIT-License")', diff -uNr -x '*~' -x '*.pyc' -x _cache pypy-dist/pypy/module/sys/vm.py pypy-njr/pypy/module/sys/vm.py diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/module/sys/vm.py pypy-njr/pypy/module/sys/vm.py --- pypy-dist/pypy/module/sys/vm.py Tue Dec 6 15:37:15 2005 +++ pypy-njr/pypy/module/sys/vm.py Wed Dec 14 23:34:36 2005 @@ -113,3 +113,11 @@ saved, and restored afterwards. This is intended to be called from a debugger from a checkpoint, to recursively debug some other code.""" return space.getexecutioncontext().call_tracing(w_func, w_args) + +def settscdump(space, w_bool): + """settscdump(bool) + +If true, tell the Python interpreter to dump VM measurements to +stderr. If false, turn off dump. 
The measurements are based on the +processor's time-stamp counter.""" + space.getexecutioncontext().settscdump(w_bool) diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/module/trans/interp_trans.py pypy-njr/pypy/module/trans/interp_trans.py diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/rpython/extfunctable.py pypy-njr/pypy/rpython/extfunctable.py --- pypy-dist/pypy/rpython/extfunctable.py Wed Dec 14 22:03:19 2005 +++ pypy-njr/pypy/rpython/extfunctable.py Thu Dec 15 17:11:50 2005 @@ -231,6 +231,13 @@ 'll_stackless/switch')) # ___________________________________________________________ +# timestamp counter +from pypy.rpython import rtsc +declare(rtsc.read, r_longlong, 'll_tsc/read') +declare(rtsc.read_diff, int, 'll_tsc/read_diff') +declare(rtsc.reset_diff, noneannotation, 'll_tsc/reset_diff') + +# ___________________________________________________________ # the exceptions that can be implicitely raised by some operations standardexceptions = { TypeError : True, diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/rpython/module/ll_trans.py pypy-njr/pypy/rpython/module/ll_trans.py --- pypy-dist/pypy/rpython/module/ll_tsc.py Wed Dec 31 18:00:00 1969 +++ pypy-njr/pypy/rpython/module/ll_tsc.py Wed Dec 14 23:27:36 2005 @@ -0,0 +1,13 @@ +from pypy.rpython.rarithmetic import r_longlong + +def ll_tsc_read(): + return r_longlong(0) +ll_tsc_read.suggested_primitive = True + +def ll_tsc_read_diff(): + return 0 +ll_tsc_read_diff.suggested_primitive = True + +def ll_tsc_reset_diff(): + pass +ll_tsc_reset_diff.suggested_primitive = True diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/rpython/rtsc.py pypy-njr/pypy/rpython/rtsc.py --- pypy-dist/pypy/rpython/rtsc.py Wed Dec 31 18:00:00 1969 +++ pypy-njr/pypy/rpython/rtsc.py Wed Dec 14 23:28:16 2005 @@ -0,0 +1,8 @@ +def read(): + raise NotImplementedError("only works in translated versions") + +def read_diff(): + raise NotImplementedError("only works in translated versions") + +def reset_diff(): + raise NotImplementedError("only works in translated versions") diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/rpython/test/test_rpbc.py pypy-njr/pypy/rpython/test/test_rpbc.py --- pypy-dist/pypy/translator/c/extfunc.py Fri Dec 9 16:19:28 2005 +++ pypy-njr/pypy/translator/c/extfunc.py Wed Dec 14 23:20:08 2005 @@ -5,7 +5,7 @@ from pypy.rpython.rstr import STR from pypy.rpython import rlist from pypy.rpython.module import ll_os, ll_time, ll_math, ll_strtod -from pypy.rpython.module import ll_stackless, ll_stack +from pypy.rpython.module import ll_stackless, ll_stack, ll_tsc from pypy.module.thread.rpython import ll_thread from pypy.module._socket.rpython import ll__socket @@ -53,6 +54,9 @@ ll_thread.ll_releaselock: 'LL_thread_releaselock', ll_thread.ll_thread_start: 'LL_thread_start', ll_thread.ll_thread_get_ident: 'LL_thread_get_ident', + ll_tsc.ll_tsc_read: 'LL_tsc_read', + ll_tsc.ll_tsc_read_diff: 'LL_tsc_read_diff', + ll_tsc.ll_tsc_reset_diff:'LL_tsc_reset_diff', ll_stackless.ll_stackless_switch: 'LL_stackless_switch', ll_stackless.ll_stackless_stack_frames_depth: 'LL_stackless_stack_frames_depth', ll_stack.ll_stack_unwind: 'LL_stack_unwind', diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/translator/c/genc.py pypy-njr/pypy/translator/c/genc.py --- pypy-dist/pypy/translator/c/genc.py Tue Dec 6 15:37:20 2005 +++ pypy-njr/pypy/translator/c/genc.py Thu Dec 15 17:11:53 2005 @@ -19,11 +19,12 @@ symboltable = None stackless = False - def __init__(self, translator, entrypoint, gcpolicy=None, libraries=None, thread_enabled=False): + def __init__(self, 
translator, entrypoint, gcpolicy=None, libraries=None, thread_enabled=False, tsc_enabled=False): self.translator = translator self.entrypoint = entrypoint self.gcpolicy = gcpolicy self.thread_enabled = thread_enabled + self.tsc_enabled = tsc_enabled if libraries is None: libraries = [] @@ -73,6 +74,8 @@ else: if self.stackless: defines['USE_STACKLESS'] = '1' + if self.tsc_enabled: + defines['USE_TSC'] = '1' cfile, extra = gen_source_standalone(db, modulename, targetdir, entrypointname = pfname, defines = defines) diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/translator/c/src/g_include.h pypy-njr/pypy/translator/c/src/g_include.h --- pypy-dist/pypy/translator/c/src/g_include.h Fri Dec 9 05:04:34 2005 +++ pypy-njr/pypy/translator/c/src/g_include.h Tue Dec 13 22:43:40 2005 @@ -40,6 +40,7 @@ # include "src/ll_thread.h" # include "src/ll_stackless.h" # include "src/ll__socket.h" +# include "src/ll_tsc.h" #endif #include "src/stack.h" diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/translator/c/src/ll_trans.h pypy-njr/pypy/translator/c/src/ll_trans.h diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/translator/c/src/ll_tsc.h pypy-njr/pypy/translator/c/src/ll_tsc.h --- pypy-dist/pypy/translator/c/src/ll_tsc.h Wed Dec 31 18:00:00 1969 +++ pypy-njr/pypy/translator/c/src/ll_tsc.h Wed Dec 14 23:25:01 2005 @@ -0,0 +1,85 @@ +/************************************************************/ +/*** C header subsection: timestamp counter access ***/ + + +#if defined(USE_TSC) + +typedef unsigned long long uint64; + +/* prototypes */ + +uint64 LL_tsc_read(void); +long LL_tsc_read_diff(void); +void LL_tsc_reset_diff(void); + +/* implementations */ + +#ifndef PYPY_NOT_MAIN_FILE + +#if defined(__alpha__) + +#define rdtscll(pcc) asm volatile ("rpcc %0" : "=r" (pcc)) + +#elif defined(__ppc__) + +#define rdtscll(var) ppc_getcounter(&var) + +static void +ppc_getcounter(uint64 *v) +{ + register unsigned long tbu, tb, tbu2; + + loop: + asm volatile ("mftbu %0" : "=r" (tbu) ); + asm volatile ("mftb %0" : "=r" (tb) ); + asm volatile ("mftbu %0" : "=r" (tbu2)); + if (__builtin_expect(tbu != tbu2, 0)) goto loop; + + ((long*)(v))[0] = tbu; + ((long*)(v))[1] = tb; +} + +#else /* this section is for linux/x86 */ + +#define rdtscll(val) asm volatile ("rdtsc" : "=A" (val)) + +#endif + +uint64 +LL_tsc_read(void) +{ + uint64 tsc; + rdtscll(tsc); + + return tsc; +} + +static uint64 tsc_last = 0; + +/* don't use for too long a diff, overflow problems: + http://www.sandpile.org/post/msgs/20003444.htm */ + +long +LL_tsc_read_diff(void) +{ + uint64 new_tsc; + unsigned long tsc_diff; + + /* returns garbage the first time you call it */ + rdtscll(new_tsc); + tsc_diff = new_tsc - tsc_last; + tsc_last = new_tsc; + + return tsc_diff; +} + +void +LL_tsc_reset_diff(void) +{ + rdtscll(tsc_last); +} + +#endif /* PYPY_NOT_MAIN_FILE */ + +#endif /* defined(USE_TSC) */ + diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/translator/goal/driver.py pypy-njr/pypy/translator/goal/driver.py --- pypy-dist/pypy/translator/goal/driver.py Tue Dec 6 15:37:19 2005 +++ pypy-njr/pypy/translator/goal/driver.py Thu Dec 15 17:11:52 2005 @@ -1,4 +1,4 @@ -import sys, os + from pypy.translator.translator import TranslationContext from pypy.translator.tool.taskengine import SimpleTaskEngine @@ -20,6 +20,7 @@ 'thread': False, # influences GC policy 'stackless': False, + 'tsc': False, 'debug': True, 'insist': False, 'backend': 'c', @@ -208,7 +209,8 @@ from pypy.translator.c.genc import CExtModuleBuilder as CBuilder cbuilder = CBuilder(self.translator, 
self.entry_point, gcpolicy = gcpolicy, - thread_enabled = getattr(opt, 'thread', False)) + thread_enabled = getattr(opt, 'thread', False), + tsc_enabled = getattr(opt, 'tsc', False)) cbuilder.stackless = opt.stackless database = cbuilder.build_database() self.log.info("database for generating C source was created") diff -uNr -x '*~' -x '*.pyc' -x '_*' pypy-dist/pypy/translator/goal/run_pypy-llvm.sh pypy-njr/pypy/translator/goal/run_pypy-llvm.sh --- pypy-dist/pypy/translator/goal/translate_pypy.py Sat Dec 10 21:57:29 2005 +++ pypy-njr/pypy/translator/goal/translate_pypy.py Thu Dec 15 17:11:52 2005 @@ -50,6 +51,8 @@ '2_gc': [OPT(('--gc',), "Garbage collector", ['boehm', 'ref', 'none'])], '3_stackless': [OPT(('--stackless',), "Stackless code generation", True)], + '4_tsc': [OPT(('--tsc',), "(x86, PowerPC, Alpha) Timestamp counter profile", + True)], }, @@ -101,6 +105,7 @@ 'gc': 'boehm', 'backend': 'c', 'stackless': False, + 'tsc': False, 'batch': False, 'text': False, From Ben.Young at risk.sungard.com Fri Dec 16 14:19:00 2005 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Fri, 16 Dec 2005 13:19:00 +0000 Subject: [pypy-dev] call_function Message-ID: Hi All, I was having a quick look at how function calling is implemented in PyPy and I can't seem to understand why the CALL_FUNCTION opcode and the space call_function method seem to do duplicate things. Whay doesn't CALL_FUNCTION just pop oparg items off the stack and call the space call_function? It doesn't seem like the functionality would be any different as the varargs cases are handled differently anyway? Cheers, Ben From cfbolz at gmx.de Fri Dec 16 15:08:38 2005 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Fri, 16 Dec 2005 15:08:38 +0100 Subject: [pypy-dev] call_function In-Reply-To: References: Message-ID: <43A2CA66.3050402@gmx.de> Hi Ben! > I was having a quick look at how function calling is implemented in PyPy > and I can't seem to understand why the CALL_FUNCTION opcode and the space > call_function method seem to do duplicate things. > > Whay doesn't CALL_FUNCTION just pop oparg items off the stack and call the > space call_function? It doesn't seem like the functionality would be any > different as the varargs cases are handled differently anyway? Well, the way we handle function arguments is definitively a bit convoluted. But for this case what you plan would not suffice: the oparg argument to the CALL_FUNCTION bytecode is not just the number of arguments of that function. Quoting from the CPython docs (http://python.org/doc/2.4.2/lib/bytecodes.html): CALL_FUNCTION argc Calls a function. The low byte of argc indicates the number of positional parameters, the high byte the number of keyword parameters. On the stack, the opcode finds the keyword parameters first. For each keyword argument, the value is on top of the key. Below the keyword parameters, the positional parameters are on the stack, with the right-most parameter on top. Below the parameters, the function object to call is on the stack. That is why the fast paths in our CALL_FUNCTION implementation work: the test oparg == 1 tests that the function has exactly one positional and no keyword argument. Cheers, Carl Friedrich From Ben.Young at risk.sungard.com Fri Dec 16 15:23:56 2005 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Fri, 16 Dec 2005 14:23:56 +0000 Subject: [pypy-dev] call_function In-Reply-To: <43A2CA66.3050402@gmx.de> Message-ID: Carl Friedrich Bolz wrote on 16/12/2005 14:08:38: > Hi Ben! 
> > > I was having a quick look at how function calling is implemented in PyPy > > and I can't seem to understand why the CALL_FUNCTION opcode and the space > > call_function method seem to do duplicate things. > > > > Whay doesn't CALL_FUNCTION just pop oparg items off the stack and call the > > space call_function? It doesn't seem like the functionality would be any > > different as the varargs cases are handled differently anyway? > > Well, the way we handle function arguments is definitively a bit > convoluted. But for this case what you plan would not suffice: the oparg > argument to the CALL_FUNCTION bytecode is not just the number of > arguments of that function. Quoting from the CPython docs > (http://python.org/doc/2.4.2/lib/bytecodes.html): > > CALL_FUNCTION argc > Calls a function. The low byte of argc indicates the number of > positional parameters, the high byte the number of keyword parameters. > On the stack, the opcode finds the keyword parameters first. For each > keyword argument, the value is on top of the key. Below the keyword > parameters, the positional parameters are on the stack, with the > right-most parameter on top. Below the parameters, the function object > to call is on the stack. > > That is why the fast paths in our CALL_FUNCTION implementation work: the > test oparg == 1 tests that the function has exactly one positional and > no keyword argument. > Well, you always check the high byte is zero and then call down. It just seems like then fact that there are 4 fastcall "hacks" doesn't need to be known at this level (as it is already known at the level below (i.e in the space)) Thanks for the reply. Cheers, Ben > Cheers, > > Carl Friedrich > > > From Ben.Young at risk.sungard.com Fri Dec 16 15:26:03 2005 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Fri, 16 Dec 2005 14:26:03 +0000 Subject: [pypy-dev] call_function In-Reply-To: Message-ID: pypy-dev-bounces at codespeak.net wrote on 16/12/2005 14:23:56: > Carl Friedrich Bolz wrote on 16/12/2005 14:08:38: > > > Hi Ben! > > > > > I was having a quick look at how function calling is implemented in > PyPy > > > and I can't seem to understand why the CALL_FUNCTION opcode and the > space > > > call_function method seem to do duplicate things. > > > > > > Whay doesn't CALL_FUNCTION just pop oparg items off the stack and call > the > > > space call_function? It doesn't seem like the functionality would be > any > > > different as the varargs cases are handled differently anyway? > > > > Well, the way we handle function arguments is definitively a bit > > convoluted. But for this case what you plan would not suffice: the oparg > > argument to the CALL_FUNCTION bytecode is not just the number of > > arguments of that function. Quoting from the CPython docs > > (http://python.org/doc/2.4.2/lib/bytecodes.html): > > > > CALL_FUNCTION argc > > Calls a function. The low byte of argc indicates the number of > > positional parameters, the high byte the number of keyword parameters. > > On the stack, the opcode finds the keyword parameters first. For each > > keyword argument, the value is on top of the key. Below the keyword > > parameters, the positional parameters are on the stack, with the > > right-most parameter on top. Below the parameters, the function object > > to call is on the stack. > > > > That is why the fast paths in our CALL_FUNCTION implementation work: the > > test oparg == 1 tests that the function has exactly one positional and > > no keyword argument. 
> > > > Well, you always check the high byte is zero and then call down. It just Sorry, I meant "you could always check" here. > seems like then fact that there are 4 fastcall "hacks" doesn't need to be > known at this level (as it is already known at the level below (i.e in the > space)) > > Thanks for the reply. Cheers, Ben > > Cheers, > Ben > > > Cheers, > > > > Carl Friedrich > > > > > > > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > > From arigo at tunes.org Fri Dec 16 17:19:52 2005 From: arigo at tunes.org (Armin Rigo) Date: Fri, 16 Dec 2005 17:19:52 +0100 Subject: [pypy-dev] call_function In-Reply-To: References: Message-ID: <20051216161952.GA26937@code1.codespeak.net> Hi Ben, On Fri, Dec 16, 2005 at 01:19:00PM +0000, Ben.Young at risk.sungard.com wrote: > Whay doesn't CALL_FUNCTION just pop oparg items off the stack and call the > space call_function? It doesn't seem like the functionality would be any > different as the varargs cases are handled differently anyway? But CALL_FUNCTION also handles keyword arguments, which space.call_function() does not. Armin From Ben.Young at risk.sungard.com Fri Dec 16 17:24:09 2005 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Fri, 16 Dec 2005 16:24:09 +0000 Subject: [pypy-dev] call_function In-Reply-To: <20051216161952.GA26937@code1.codespeak.net> Message-ID: Armin Rigo wrote on 16/12/2005 16:19:52: > Hi Ben, > > On Fri, Dec 16, 2005 at 01:19:00PM +0000, Ben.Young at risk.sungard.com wrote: > > Whay doesn't CALL_FUNCTION just pop oparg items off the stack and call the > > space call_function? It doesn't seem like the functionality would be any > > different as the varargs cases are handled differently anyway? > > But CALL_FUNCTION also handles keyword arguments, which > space.call_function() does not. > > Sorry, I meant: def CALL_FUNCTION(f, oparg): if oparg & 0xff == oparg: w_function = f.valuestack.pop() #pseudo code w_args = f.valuestack.pop(oparg) w_result = f.space.call_function(w_function, *w_args) f.valuestack.push(w_result) else: # general case f.call_function(oparg) Doesn't this do the same thing? f.space.call_function seems to be able to handle any number of positional args. Cheers, Ben > Armin > From arigo at tunes.org Fri Dec 16 17:56:38 2005 From: arigo at tunes.org (Armin Rigo) Date: Fri, 16 Dec 2005 17:56:38 +0100 Subject: [pypy-dev] call_function In-Reply-To: References: <20051216161952.GA26937@code1.codespeak.net> Message-ID: <20051216165638.GA27885@code1.codespeak.net> Hi Ben, On Fri, Dec 16, 2005 at 04:24:09PM +0000, Ben.Young at risk.sungard.com wrote: > def CALL_FUNCTION(f, oparg): > if oparg & 0xff == oparg: > w_function = f.valuestack.pop() > #pseudo code > w_args = f.valuestack.pop(oparg) > w_result = f.space.call_function(w_function, *w_args) > f.valuestack.push(w_result) > else: > # general case > f.call_function(oparg) > > Doesn't this do the same thing? f.space.call_function seems to be able to > handle any number of positional args. Well, the real code you are quoting contains comment "XXX start of hack for performance" and "XXX end of hack for performance" around the first part. Indeed, none of this is necessary. It does give a good performance boost, though, because it avoids having to build a list from the arguments popped off the start. 
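(To make the contrast concrete, here is a schematic sketch -- not the real PyPy code, and the Arguments/call_args names are only stand-ins -- of what the general path has to do compared with the fast path:)

    # general case: collect the popped values into a list first
    def call_general(f, nargs):
        args_w = [f.valuestack.pop() for i in range(nargs)]
        args_w.reverse()                    # right-most argument was on top
        w_function = f.valuestack.pop()
        return f.space.call_args(w_function, Arguments(f.space, args_w))

    # fast path for, say, two positional arguments: no list is ever built
    def call_fast2(f):
        w_arg2 = f.valuestack.pop()
        w_arg1 = f.valuestack.pop()
        w_function = f.valuestack.pop()
        return f.space.call_function(w_function, w_arg1, w_arg2)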
If you are confused by space.call_function() containing *again* a list of if/elif fast paths, remember that in the C version there are different versions of space.call_function() created, one for each number of arguments (we do this for all *arg functions). In each version, each if/elif condition is known at compile-time, so only one of the branches is left. It's not actually testing the number of arguments again at run-time. A bientot, Armin. From eric at vanrietpaap.nl Sat Dec 17 12:39:53 2005 From: eric at vanrietpaap.nl (Eric van Riet Paap) Date: Sat, 17 Dec 2005 12:39:53 +0100 Subject: [pypy-dev] pypy-sync dec 15th summary Message-ID: <376296EC-8E78-409F-9E06-9F79533EFA1B@vanrietpaap.nl> For most people, last week was mostly about recovering from the Gothenburg sprint. The JIT and optimization effort seem to be the most active topics at the moment. The topics for the Mallorca sprint are still a bit in flux because it's unclear what will get done before then. These are the topics that were mentioned during this meeting: * JIT * stackless (although Christian hopes to have it finished by then) * _socket module * optimizations * logic programming * improve external function interactions / machinery On the subject of Logic Programming / Aspect Oriented, the main developers indicated they will commit to the repository to make it more visible to the rest of the group. It was noted that the owl parser is about half finished. A suggestion was made that some plan/status/documentation about these subjects in pypy/doc would help others to get a clearer picture. I was under the impression that the response was "yes, good idea. We'll do that" :) Until next week! Eric. Full log can be found at http://tismerysoft.de/pypy/irc-logs/pypy-sync/%23pypy-sync.log.20051215 From olivier.dormond at gmail.com Mon Dec 19 15:05:24 2005 From: olivier.dormond at gmail.com (Olivier Dormond) Date: Mon, 19 Dec 2005 15:05:24 +0100 Subject: [pypy-dev] Greenlet and socket Message-ID: Hi everyone, Armin just told me that some development is ongoing on a socket implementation using a basic coroutine-like principle. I'm really interested in this, and I've even written a veeery simple test programme using greenlets that makes sockets used in a synchronous way (the simplest one) interact as if they were in separate threads. Is this new wonderful implementation going to provide something that looks like the following?
That would really be a tremendous feature to have in a high-level
language :-)

Cheers,

Olivier


#!/usr/bin/env python

from select import select
from socket import *
from py.magic import greenlet

iwt = []
owt = []
ewt = []

class greensocket(object):
    def __init__(self, family, type=SOCK_STREAM, proto=0):
        self._family = family
        self._socket = socket(family, type, proto)
        self._socket.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)

    def __getattr__(self, attr):
        return getattr(self._socket, attr)

    def accept(self):
        global iwt
        if not self in iwt:
            iwt.append(self)
        self._green = greenlet.getcurrent()
        greenlet.getcurrent().parent.switch()
        iwt.remove(self)
        s, addr = self._socket.accept()
        sock = greensocket(self._family)
        sock._socket = s
        return sock, addr

    def close(self):
        global iwt
        try:
            iwt.remove(self)
        except ValueError:
            pass
        self._socket.close()

    def recv(self, buffersize, flags=0):
        global iwt
        if not self in iwt:
            iwt.append(self)
        self._green = greenlet.getcurrent()
        greenlet.getcurrent().parent.switch()
        iwt.remove(self)
        return self._socket.recv(buffersize, flags)

    def switch(self, *args):
        self._green.switch(*args)

@greenlet
def printer():
    s = greensocket(AF_INET, SOCK_STREAM)
    s.bind(('0.0.0.0', 10000))
    s.listen(1)
    client, address = s.accept()
    while 1:
        line = client.recv(1024)
        line = line.strip()
        print "client says:", line
        client.sendall("done\n")

@greenlet
def computer():
    s = greensocket(AF_INET, SOCK_STREAM)
    s.bind(('0.0.0.0', 10001))
    s.listen(1)
    client, address = s.accept()
    while 1:
        line = client.recv(1024)
        line = line.strip()
        print "computing:", line
        client.sendall("result: "+`eval(line)`+"\n")

def main_loop():
    while 1:
        rs, ws, es = select(iwt, owt, ewt)
        for r in rs:
            r.switch()
        for w in ws:
            w.switch()
        for e in es:
            e.switch()

print "switching on the printer"
printer.switch()
print "switching on the computer"
computer.switch()
print "switching on the main loop"
main_loop()

From mahs at telcopartners.com Tue Dec 20 23:43:59 2005
From: mahs at telcopartners.com (Michael Spencer)
Date: Tue, 20 Dec 2005 14:43:59 -0800
Subject: [pypy-dev] What happened...this week in PyPy?
Message-ID: 

Inquiring minds want to know ;-)

Michael

From lac at strakt.com Wed Dec 21 01:15:58 2005
From: lac at strakt.com (Laura Creighton)
Date: Wed, 21 Dec 2005 01:15:58 +0100
Subject: [pypy-dev] What happened...this week in PyPy?
In-Reply-To: Message from Michael Spencer of "Tue, 20 Dec 2005 14:43:59 PST."
References: 
Message-ID: <200512210015.jBL0Fwpj021329@theraft.strakt.com>

In a message of Tue, 20 Dec 2005 14:43:59 PST, Michael Spencer writes:
>Inquiring minds want to know ;-)
>
>Michael

Ah, a heck of a lot of 'writing reports for the EU' is the short answer ...

Laura

From tismer at stackless.com Wed Dec 21 11:17:12 2005
From: tismer at stackless.com (Christian Tismer)
Date: Wed, 21 Dec 2005 10:17:12 +0000
Subject: [pypy-dev] tb meeting 05-12-22
Message-ID: <43A92BA8.3070307@stackless.com>

Hi,

I will be unable to attend the meeting tomorrow since I'm flying from
Iceland to Germany.  If it is possible to move the event to Friday for
instance, fine with me.  Otherwise here my 3-liner

DONE: minimal coroutine layout
NEXT: implementing a mixed Stackless module supporting Greenlets and
      Tasklets, with Richard
BLOCK: None

cheers - chris
--
Christian Tismer :^)
tismerysoft GmbH : Have a break!
Take a ride on Python's
Johannes-Niemeyer-Weg 9A : *Starship* http://starship.python.net/
14109 Berlin : PGP key -> http://wwwkeys.pgp.net/
work +49 30 802 86 56  mobile +49 173 24 18 776  fax +49 30 80 90 57 05
PGP 0x57F3BF04  9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
whom do you want to sponsor today?  http://www.stackless.com/

From ac at strakt.com Wed Dec 21 11:50:24 2005
From: ac at strakt.com (=?ISO-8859-1?Q?Anders_Chrigstr=F6m?=)
Date: Wed, 21 Dec 2005 11:50:24 +0100
Subject: [pypy-dev] PyPy-sync meetings
Message-ID: <43A93370.3040809@strakt.com>

As I am going to have vacation from 22/12 to 8/1 I will most likely not
attend any pypy-sync meeting until the one 12/1.

My three lines until then:
DONE: performance improvement work.
NEXT: Vacation
BLOCKERS: None

/Arre

From eric at vanrietpaap.nl Wed Dec 21 12:08:15 2005
From: eric at vanrietpaap.nl (Eric van Riet Paap)
Date: Wed, 21 Dec 2005 12:08:15 +0100
Subject: [pypy-dev] pypy-sync dec 22nd announcement
Message-ID: <1B9FD098-062C-49FD-AC85-B0652BD06FF1@vanrietpaap.nl>

hello everyone,

Tomorrow's pypy-sync will probably be a short one since some people will
already be enjoying their well earned holiday.

Meeting topics so far:

* activity reports
* determine blockers (none probably because most developers will not be
  working next week (officially))
* anything that pops to mind
* the usual bye's, this time with merry christmas-es.

If there is anything you want to see covered during this meeting please
post it to pypy-dev or to me.

Time and place as always.  I would be very surprised if you don't know
where to find us by now. :)

cheers
Eric

From mwh at python.net Wed Dec 21 14:03:24 2005
From: mwh at python.net (Michael Hudson)
Date: Wed, 21 Dec 2005 13:03:24 +0000
Subject: [pypy-dev] Re: What happened...this week in PyPy?
References: 
Message-ID: <2mk6dy38mb.fsf@starship.python.net>

Michael Spencer writes:

> Inquiring minds want to know ;-)

Eh, I haven't finished it yet, sorry.  It's about 75% written...

Glad to know it's appreciated!

Cheers,
mwh

--
C is not clean -- the language has _many_ gotchas and traps, and
although its semantics are _simple_ in some sense, it is not any
cleaner than the assembly-language design it is based on.
                                  -- Erik Naggum, comp.lang.lisp

From mwh at python.net Wed Dec 21 16:39:46 2005
From: mwh at python.net (Michael Hudson)
Date: Wed, 21 Dec 2005 15:39:46 +0000
Subject: [pypy-dev] This Week in PyPy 7
Message-ID: <2md5jq31dp.fsf@starship.python.net>

Better late than never, eh?

Introduction
============

This is the seventh summary of what's been going on in the world of PyPy in
the last week.  I'd still like to remind people that when something worth
summarizing happens to recommend it for "This Week in PyPy" as mentioned on:

    http://codespeak.net/pypy/dist/pypy/doc/weekly/

where you can also find old summaries.

There were about 110 commits to the pypy section of codespeak's repository
in the last week.

The Sprint!
===========

The last weekly summary was written towards the end of the sprint.  The
things we did in the couple of remaining days were written up in the second
sprint report:

    http://codespeak.net/pipermail/pypy-dev/2005q4/002660.html

Apart from continuing our work from the first half of the sprint, the main
new work was implementing __del__ support in the translated PyPy.

IRC Summary
===========

Thanks again to Pieter for this.

**Monday** http://tismerysoft.de/pypy/irc-logs/pypy/%23pypy.log.20051212::

    [00:26] Stakkars says that it is great that pypy does not punish you
    for indirection.
    He is of the opinion that he writes better style in RPython than in
    Python, because the "it is slow" aspect is gone.

**Tuesday** http://tismerysoft.de/pypy/irc-logs/pypy/%23pypy.log.20051213::

    [21:01] Heatsink says that he is doing some dynamic optimizations in
    CPython.  This turns into a discussion about the nature of pypy, and
    Arigo takes us on a tour of how pypy and the JIT will interact in the
    future.  A good read of general pypy ideas.

**Thursday** http://tismerysoft.de/pypy/irc-logs/pypy/%23pypy.log.20051215::

    [10:24] Ericvrp discovers an optimization that makes pypy 6.8x slower
    than CPython on the richards test suite.  All if-elses are converted
    to switches.  Cfbolz replies that it is time to write a graph
    transformation to implement this optimization officially.

PyPy's Bytecode Dispatcher
==========================

Something that was suggested but never got-around-to at the last sprint was
to modify the translation process so that the bytecode dispatch loop of the
interpreter used a C switch rather than a table of function pointers.

The bytecode implementation code in PyPy builds a list of functions that
contain the implementation of the respective bytecode.  Up until a few days
ago, the dispatch function retrieved the correct function by using the
bytecode as an index into this list.  This was turned by the translator and
the C backend into an array of function pointers.  This has the drawback
that the bytecode-implementing functions can never be inlined (even though
some of them are quite small) and there always is a read from memory for
every bytecode executed.

During the Gothenburg sprint we discussed a strategy to transform the
dispatch code into something more efficient and in the last week Eric, Arre
and Carl Friedrich implemented this strategy.  Now the dispatching is done
by a huge (automatically generated, of course) chain of if/elif/else that
all test the value of the same variable.  In addition there is a
transformation that transforms chains of such if/elif/else blocks into a
block that has an integer variable as an exitswitch and links with exitcases
corresponding to the different values of the single integer variable.  The C
backend converts such a block into a switch.  In addition this technique
makes it possible for our inliner to inline some of the bytecode
implementing functions.

Using the new dispatcher pypy-c got 10% or so faster (though the *first*
time we ran it it was much much faster!  Benchmarking is hard).

Preparations for EU-review still ongoing
========================================

Many developers are still involved in preparations for the EU review on
20th January.  Reports are being finalized and there are discussions about
various issues that are only indirectly related to the development efforts
(in so far as it provides the basis for the partial funding we receive).
We probably will only know on the 20th if everything works out suitably.

--
This makes it possible to pass complex object hierarchies to a C coder
who thinks computer science has made no worthwhile advancements since
the invention of the pointer.
                           -- Gordon McMillan, 30 Jul 1998

From mahs at telcopartners.com Wed Dec 21 22:29:05 2005
From: mahs at telcopartners.com (Michael Spencer)
Date: Wed, 21 Dec 2005 13:29:05 -0800
Subject: [pypy-dev] Re: What happened...this week in PyPy?
In-Reply-To: <2mk6dy38mb.fsf@starship.python.net> References: <2mk6dy38mb.fsf@starship.python.net> Message-ID: Michael Hudson wrote: > Michael Spencer writes: > >> Inquiring minds want to know ;-) > > Eh, I haven't finished it yet, sorry. It's about 75% written... > > Glad to know it's appreciated! > > Cheers, > mwh > They are appreciated indeed, at least by me: thanks for the latest Michael From bea at netg.se Thu Dec 22 02:15:03 2005 From: bea at netg.se (Beatrice During) Date: Thu, 22 Dec 2005 02:15:03 +0100 (CET) Subject: [pypy-dev] Mallorca sprint announcement Message-ID: Hi there The PyPy team is happy to invite you to the first PyPy sprint of 2006: Palma de Mallorca PyPy Sprint: 23rd - 29th January 2006 ============================================================ The next PyPy sprint is scheduled to take place January 2006 in Palma De Mallorca, Balearic Isles, Spain. We'll give newcomer-friendly introductions and the focus will mainly be on current JIT work, garbage collection, alternative threading models, logic programming and on improving the interface with external functions. To learn more about the new Python-in-Python implementation look here: http://codespeak.net/pypy Goals and topics of the sprint ------------------------------ In Gothenburg we have made some first forays into the interesting topics of Just-in-Time compilation. In Mallorca we will continue that and have the following ideas: - Further work/experimentation toward Just-In-Time Compiler generation, which was initiated with the Abstract Interpreter started in Gothenburg. - Integrating our garbage collection toolkit with the backends and the code generation. - Heading into the direction of adding logic programming to PyPy. - Optimization work: our threading implementation is still incredibly slow, we need to work on that. Furthermore there are still quite some slow places in the interpreter that could be improved. - getting the socket module to a more complete state (it is already improved but still far from complete) - generally improving the way we interface with external functions. - whatever participants want to do with PyPy (please send suggestions to the mailing list before to allow us to plan and give feedback) Location & Accomodation ------------------------ The sprint will be held at the Palma University (UIB - Universitat de les Illes Balears), in their GNU/Linux lab (http://mnm.uib.es/phpwiki/AulaLinux). We are hosted by the Computer Science department and Ricardo Galli is our contact person there, helping with arranging facilities. The University is located 7 km away from the central Palma. Busses to the University departs from "Plaza de Espa?a" (which is a very central location in Palma). Take bus 19 to the UIB campus. A ticket for one urban trip costs 1 euro. You can also buy a card that is valid for 10 trips and costs 7.51 euros. Information about bus timetables and routes can be found on: http://www.a-palma.es A map over the UIB campus are can be found on: http://www.uib.es/imagenes/planoCampus.html The actual address is: 3r pis de l'Anselm Turmeda which can be found on the UIB Campus map. At "Plaza de Espa?a" there is a hostel (Hostal Residencia Terminus) which has been recommended to us. It's cheap (ca 50 euros/double room with bathroom). 
Some more links to accomodations (flats, student homes and hotels): http://www.lodging-in-spain.com/hotel/town/Islas_Baleares,Mallorca,Palma_de_Mallorca,1/ http://www.uib.es/fuguib/residencia/english/index.html http://www.homelidays.com/EN-Holidays-Rental/110_Search/SearchList.asp?DESTINATION=Palma%20de%20Mallorca&ADR_PAYS=ES&ADR_ LOCALISATION=ES%20ISLASBALEARES%20MALLORCA If you want to find a given street, you can search here: http://www.callejeando.com/Pueblos/pueblo7_1.htm To get to Palma De Mallorca almost all low fare airlines and travel agencies have cheap tickets to get there. Information about Mallorca and Palma (maps, tourist information, local transports, recommended air lines, ferries and much more) can be found on: http://www.palmademallorca.es/portalPalma/home.jsp Comments on the weather: In January it is cold and wet on Mallorca Average temperature: 8,4 degrees Celsius Lowest temperature: 2 degrees Celsius Highest temperature: 14,5 degrees Celsius Average humidity rate: 77,6 % So more time for coding and less time for sunbathing and beaches ;-) Exact times ----------- The public PyPy sprint is held Monday 23rd - Sunday 29th January 2006. Hours will be from 10:00 until people have had enough. It's a good idea to arrive a day before the sprint starts and leave a day later. In the middle of the sprint there usually is a break day and it's usually ok to take half-days off if you feel like it. For this particular break day, Thursday, we are invited to the studio of Gin?s Qui?onero, a local artist and painter. Gin?s have also been the person helping us getting connections to UIB and providing much appreciated help regarding accommodation and other logistical information. For those of you interested - here is his website where there also are paintings showing his studio: http://www.hermetex4.com/damnans/ For those interested in playing collectable card games, this will also be an opportunity to get aquainted with V:TES which will be demoed by Gin?s and Beatrice and Sten D?ring. For more information on this cardgame - see: http://www.white-wolf.com/vtes/index.php. (The Mallorca sprint was organized through contacts within the V:TES community). Network, Food, currency ------------------------ Currency is Euro. Food is available in the UIB Campus area as well as cheap restaurants in Palma. You normally need a wireless network card to access the network, but we can provide a wireless/ethernet bridge. 230V AC plugs are used in Mallorca. Registration etc.pp. -------------------- Please subscribe to the `PyPy sprint mailing list`_, introduce yourself and post a note that you want to come. Feel free to ask any questions there! There also is a separate `Mallorca people`_ page tracking who is already thought to come. If you have commit rights on codespeak then you can modify yourself a checkout of http://codespeak.net/svn/pypy/extradoc/sprintinfo/mallorca-2006/people.txt .. _`PyPy sprint mailing list`: http://codespeak.net/mailman/listinfo/pypy-sprint .. _`Mallorca people`: http://codespeak.net/pypy/extradoc/sprintinfo/mallorca-2006/people.html From cfbolz at gmx.de Thu Dec 22 02:23:06 2005 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Thu, 22 Dec 2005 02:23:06 +0100 Subject: [pypy-dev] Mallorca sprint announcement In-Reply-To: References: Message-ID: <43A9FFFA.8030609@gmx.de> Hi! Beatrice During wrote: > Hi there > > The PyPy team is happy to invite you to the first PyPy sprint of 2006: ... be careful when replying, bea sent to both pypy-funding and pypy-dev. 
Cheers, Carl Friedrich From mwh at python.net Fri Dec 23 11:42:22 2005 From: mwh at python.net (Michael Hudson) Date: Fri, 23 Dec 2005 10:42:22 +0000 Subject: [pypy-dev] This Week in PyPy is on holiday Message-ID: <2mslsk14dt.fsf@starship.python.net> Last week we wrote reports. Next week is Christmas and I fully expect not much to happen. I'll try to write one on the 6th, but as I'll have got back from holiday the day before, I'll need some help :) Cheers, mwh -- I located the link but haven't bothered to re-read the article, preferring to post nonsense to usenet before checking my facts. -- Ben Wolfson, comp.lang.python From lac at strakt.com Thu Dec 29 15:47:39 2005 From: lac at strakt.com (Laura Creighton) Date: Thu, 29 Dec 2005 15:47:39 +0100 Subject: [pypy-dev] OCSCON 2006 in Portland again, paper deadline Feb 13 Message-ID: <200512291447.jBTEleJC023145@theraft.strakt.com> Maybe we should target a talk, not to the Python crowd, but to the 'not specific to any language' track? Or do we all plan to be in Japan that week (July 24-28)? I am thinking that we need a calendar on codespeak to track where things are happening and what (if anything) we want to do .... Laura ------- Forwarded Message Return-Path: python-announce-list-bounces at python.org Delivery-Date: Thu Dec 29 15:37:45 2005 From: "Kevin Altis" Newsgroups: comp.lang.python.announce Subject: ANN: OSCON 2006 Call for Proposals OSCON 2006: Opening Innovation http://conferences.oreillynet.com/os2006/ Save the date for the 8th annual O'Reilly Open Source Convention, happening July 24-28, 2006 at the Oregon Convention Center in beautiful Portland, Oregon. Call For Participation - ---------------------- Submit a proposal-fill out the form at: http://conferences.oreillynet.com/cs/os2006/create/e_sess/ Important Dates: * Proposals Due: Midnight (PST) February 13, 2006 * Speaker Notification: March 27, 2006 * Tutorial Presentation Files Due: June 12, 2006 * Session Presentation Files Due: June 26, 2006 * Conference: July 24-28, 2006 Proposals - --------- We are considering proposals for 45 minute sessions and 3 hour tutorials. We rarely accept 90 minute proposals, as most general sessions are 45 minutes in length. Your proposals are examined by a committee which draws from them and which also solicits proposals to build the program. Proposals are due by midnight (PST), Feb. 13, 2006. The OSCON Speaker Manager, Vee McMillen, emails notification of the status of your talk (accepted or otherwise) by March 27, 2006. Unless the content of your talk is particularly timely (e.g., features of a product that will be launched at OSCON), you are required to send us your slides several weeks before the conference begins. Submit proposals via the form below. Some tips for writing a good proposal for a good talk: * Keep it free of marketing: talk about open source software, but not about a commercial product--the audience should be able to use and improve the things you talk about without paying money * Keep the audience in mind: they're technical, professional, and already pretty smart. * Clearly identify the level of the talk: is it for beginners to the topic, or for gurus? What knowledge should people have when they come to the talk? * Give it a simple and straightforward title: fancy and clever titles make it harder for people (committee and attendees) to figure out what you're really talking about * Limit the scope of the talk: in 45 minutes, you won't be able to cover Everything about Widget Framework X. 
Instead, pick a useful aspect, or a particular technique, or walk through a simple program. * Pages of code are unreadable: mere mortals can deal with code a line at a time. Sometimes three lines at a time. A page of code can't be read when it's projected, and can't be comprehended by the audience. * Explain why people will want to attend: is the framework gaining traction? Is the app critical to modern systems? Will they learn how to deploy it, program it, or just what it is? * Let us know in your proposal notes whether you can give all the talks you submitted proposals for * Explain what you will cover in the talk NOTE: All presenters whose talks are accepted (excluding Lightning Talks) will receive free registration at the conference. For each half-day tutorial, the presenter receives one night's accommodation, a limited travel allowance, and an honorarium. We give tutors and speakers registration to the convention (including tutorials), and tutors are eligible for a travel allowance: up to US$300 from the west coast of the USA, up to US$500 from the east coast of the USA, up to US$800 from outside the USA. Registration opens April, 2006. If you would like to be notified by email when registration opens, please use the form on our main page. CONFERENCE INFO =============== The O'Reilly Open Source Convention is where coders, sysadmins, entrepreneurs, and business people working in free and open source software gather to share ideas, discover code, and find solutions. At OSCON 2005, more than 2,400 attendees took part in 241 sessions and tutorials across eleven technology tracks, learning about the newest features and versions from creators and experts. A record number of products launches and announcements were made, and sponsors and exhibitors from a wide range of companies filled the largest exhibit hall in OSCON's history. We anticipate that OSCON 2006 will be even more successful, and continue to be the place for the open source community to meet up, debate, make deals, and connect face to face. OSCON 2006 will take place at the Oregon Convention Center in Portland, Oregon July 24-28, 2006. OSCON 2006 will feature the projects, technologies, and skills that you need to write and deploy killer modern apps. 
We're looking for proposals on platforms and applications around: * Multimedia including voice (VoIP) and video * AI including spam-busting, classification, clustering, and data mining * Collaboration including email, calendars, RSS, OPML, mashups, IM, presence, and session initialization * Project best practices including governance, starting a project, and managing communities * Microsoft Windows-based open source projects including .NET, Mono, and regular C/C++/Visual Basic Windows apps * Enterprise Java techniques including integration, testing, and scalable deployment solutions * Linux kernel skills for sysadmins including virtualization, tuning, and device drivers * Device hacking including iPods, Nintendo, PSP, XBox 360, and beyond * Design including CSS, GUI, and user experience (XP) * Entrepreneurial topics including management for techies, how to go into business for yourself, and business models that work * Security including hardening, hacking, root kits (Sony and otherwise), and intrusion detection/cleanup * Fun subjects with no immediate commercial application including retro computing, games, and BitTorrent Tracks at OSCON will include: * Desktop Apps * Databases, including MySQL, PostgreSQL, Ingres, and others * Emerging Topics * Java * Linux Kernel for SysAdmins * Linux for Programmers * Perl, celebrating the 10th year of The Perl Conference! * PHP * Programming, including everything that's not specific to a particular language * Python * Security * Ruby, including Ruby on Rails * Web Apps, including Apache * Windows - -- http://mail.python.org/mailman/listinfo/python-announce-list Support the Python Software Foundation: http://www.python.org/psf/donations.html ------- End of Forwarded Message From michael.drumheller at boeing.com Thu Dec 29 19:06:24 2005 From: michael.drumheller at boeing.com (Drumheller, Michael) Date: Thu, 29 Dec 2005 10:06:24 -0800 Subject: [pypy-dev] Thread/gil/java question Message-ID: <716621DCB4468F46BBCC1BCFBED45C120129DA6F@XCH-NW-2V2.nw.nos.boeing.com> I am familiar with the GIL limitation on Python concurrency, e.g., <> Does anyone know whether Java has a similar limitation? (Yes: I have no experience with Java at all.) Thanks, MD From pedronis at strakt.com Thu Dec 29 19:39:48 2005 From: pedronis at strakt.com (Samuele Pedroni) Date: Thu, 29 Dec 2005 19:39:48 +0100 Subject: [pypy-dev] Thread/gil/java question In-Reply-To: <716621DCB4468F46BBCC1BCFBED45C120129DA6F@XCH-NW-2V2.nw.nos.boeing.com> References: <716621DCB4468F46BBCC1BCFBED45C120129DA6F@XCH-NW-2V2.nw.nos.boeing.com> Message-ID: <43B42D74.70406@strakt.com> Drumheller, Michael wrote: > I am familiar with the GIL limitation on Python concurrency, e.g., > < oncurrency>> > > Does anyone know whether Java has a similar limitation? (Yes: I have no > experience with Java at all.) > notice that the GIL is an implementation detail of CPython, not part of Python semantics. Java implementations don't have global locks (usually) in CPython GIL sense, also Java builtin datatypes and bytecode granularity is such that not much could be assumed for user programs even in the presence of such a lock. Whether the Java specification would forbid such a lock is a different matter. Jython also doesn't have a GIL. It is still true that because of the GIL in CPython that Python multithreading programs usually make assumptions about some operations on/involving only builtin types and no callbacks to user code to be atomic. Jython uses locks but no single global lock to try to respect these de-facto semantics. 
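A small illustrative sketch of the point about de-facto atomicity, in plain
Python 2 of that era and not taken from this thread: a single operation on a
builtin type, such as list.append, can be relied on not to corrupt data under
CPython because of the GIL, and Jython preserves that observable behaviour by
locking inside the builtin instead; a compound read-modify-write, however, is
not atomic on either implementation and needs an explicit lock.

import threading

counter = 0
items = []
lock = threading.Lock()

def worker():
    global counter
    for i in range(100000):
        # a single operation on a builtin type: CPython programs often
        # rely on this being atomic thanks to the GIL; Jython keeps the
        # same observable behaviour by locking inside the builtin instead
        items.append(i)
        # a compound read-modify-write is not atomic on either
        # implementation, so it needs an explicit lock
        lock.acquire()
        try:
            counter += 1
        finally:
            lock.release()

threads = [threading.Thread(target=worker) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print len(items), counter    # both should end up at 400000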
From michael.drumheller at boeing.com Thu Dec 29 20:09:12 2005 From: michael.drumheller at boeing.com (Drumheller, Michael) Date: Thu, 29 Dec 2005 11:09:12 -0800 Subject: [pypy-dev] Thread/gil/java question Message-ID: <716621DCB4468F46BBCC1BCFBED45C120129DA70@XCH-NW-2V2.nw.nos.boeing.com> Thank you for the quick response. Let me get this straight: If I use Jython instead of CPython, then my Python threads can migrate to different CPUs? Not sure what you meant by "... granularity is such that not much could be assumed for user programs..."--could you clarify that a bit? Thank you again. Mike D. -----Original Message----- From: Samuele Pedroni [mailto:pedronis at strakt.com] Sent: Thursday, December 29, 2005 10:40 AM To: Drumheller, Michael Cc: pypy-dev at codespeak.net Subject: Re: [pypy-dev] Thread/gil/java question Drumheller, Michael wrote: > I am familiar with the GIL limitation on Python concurrency, e.g., > < #c > oncurrency>> > > Does anyone know whether Java has a similar limitation? (Yes: I have > no experience with Java at all.) > notice that the GIL is an implementation detail of CPython, not part of Python semantics. Java implementations don't have global locks (usually) in CPython GIL sense, also Java builtin datatypes and bytecode granularity is such that not much could be assumed for user programs even in the presence of such a lock. Whether the Java specification would forbid such a lock is a different matter. Jython also doesn't have a GIL. It is still true that because of the GIL in CPython that Python multithreading programs usually make assumptions about some operations on/involving only builtin types and no callbacks to user code to be atomic. Jython uses locks but no single global lock to try to respect these de-facto semantics. From bea at netg.se Fri Dec 30 00:16:09 2005 From: bea at netg.se (Beatrice During) Date: Fri, 30 Dec 2005 00:16:09 +0100 (CET) Subject: [pypy-dev] OCSCON 2006 in Portland again, paper deadline Feb 13 In-Reply-To: <200512291447.jBTEleJC023145@theraft.strakt.com> References: <200512291447.jBTEleJC023145@theraft.strakt.com> Message-ID: Hi there We have such a calendar on codespeak - Michael have informed about it on pypy-funding. You can find it: http://codespeak.net/pypy/dist/pypy/doc/contact.html or directly at: http://pypycal.sabi.net/. About the Japan sprint: we have not yet decided who will go there. The PO has said it is OK, we still need to check which partners do go and who exactly.... There is also the FOSDEM 2006 conference which Holger mentioned to me, it is on the 25-26th of February 2006. Unfortunately it collides with PyCon but maybe some people not going to the US could visit FOSDEM, which is supposedly a really good OSS conference. Their website seems not to work though ;-) http://www.fosdem.org/2006/index/news/fosdem Michael - could you update this? Cheers Bea On Thu, 29 Dec 2005, Laura Creighton wrote: > > Maybe we should target a talk, not to the Python crowd, but to the > 'not specific to any language' track? Or do we all plan to be > in Japan that week (July 24-28)? I am thinking that we need a > calendar on codespeak to track where things are happening and > what (if anything) we want to do .... 
> > Laura > > ------- Forwarded Message > > Return-Path: python-announce-list-bounces at python.org > Delivery-Date: Thu Dec 29 15:37:45 2005 > From: "Kevin Altis" > Newsgroups: comp.lang.python.announce > Subject: ANN: OSCON 2006 Call for Proposals > > OSCON 2006: Opening Innovation > http://conferences.oreillynet.com/os2006/ > > Save the date for the 8th annual O'Reilly Open Source Convention, happening > July 24-28, 2006 at the Oregon Convention Center in beautiful Portland, > Oregon. > > > > Call For Participation > - ---------------------- > > Submit a proposal-fill out the form at: > > http://conferences.oreillynet.com/cs/os2006/create/e_sess/ > > Important Dates: > > * Proposals Due: Midnight (PST) February 13, 2006 > * Speaker Notification: March 27, 2006 > * Tutorial Presentation Files Due: June 12, 2006 > * Session Presentation Files Due: June 26, 2006 > * Conference: July 24-28, 2006 > > Proposals > - --------- > > We are considering proposals for 45 minute sessions and 3 hour tutorials. > We rarely accept 90 minute proposals, as most general sessions are 45 > minutes in length. Your proposals are examined by a committee which draws > from them and which also solicits proposals to build the program. Proposals > are due by midnight (PST), Feb. 13, 2006. The OSCON Speaker Manager, Vee > McMillen, emails notification of the status of your talk (accepted or > otherwise) by March 27, 2006. Unless the content of your talk is > particularly timely (e.g., features of a product that will be launched at > OSCON), you are required to send us your slides several weeks before the > conference begins. Submit proposals via the form below. > > Some tips for writing a good proposal for a good talk: > > * Keep it free of marketing: talk about open source software, but not about > a commercial product--the audience should be able to use and improve the > things you talk about without paying money > * Keep the audience in mind: they're technical, professional, and already > pretty smart. > * Clearly identify the level of the talk: is it for beginners to the topic, > or for gurus? What knowledge should people have when they come to the talk? > * Give it a simple and straightforward title: fancy and clever titles make > it harder for people (committee and attendees) to figure out what you're > really talking about > * Limit the scope of the talk: in 45 minutes, you won't be able to cover > Everything about Widget Framework X. Instead, pick a useful aspect, or a > particular technique, or walk through a simple program. > * Pages of code are unreadable: mere mortals can deal with code a line at a > time. Sometimes three lines at a time. A page of code can't be read when > it's projected, and can't be comprehended by the audience. > * Explain why people will want to attend: is the framework gaining traction? > Is the app critical to modern systems? Will they learn how to deploy it, > program it, or just what it is? > * Let us know in your proposal notes whether you can give all the talks you > submitted proposals for > * Explain what you will cover in the talk > > NOTE: All presenters whose talks are accepted (excluding Lightning Talks) > will receive free registration at the conference. For each half-day > tutorial, the presenter receives one night's accommodation, a limited travel > allowance, and an honorarium. 
We give tutors and speakers registration to > the convention (including tutorials), and tutors are eligible for a travel > allowance: up to US$300 from the west coast of the USA, up to US$500 from > the east coast of the USA, up to US$800 from outside the USA. > > Registration opens April, 2006. If you would like to be notified by email > when registration opens, please use the form on our main page. > > > > CONFERENCE INFO > =============== > > The O'Reilly Open Source Convention is where coders, sysadmins, > entrepreneurs, and business people working in free and open source software > gather to share ideas, discover code, and find solutions. At OSCON 2005, > more than 2,400 attendees took part in 241 sessions and tutorials across > eleven technology tracks, learning about the newest features and versions > from creators and experts. A record number of products launches and > announcements were made, and sponsors and exhibitors from a wide range of > companies filled the largest exhibit hall in OSCON's history. We anticipate > that OSCON 2006 will be even more successful, and continue to be the place > for the open source community to meet up, debate, make deals, and connect > face to face. OSCON 2006 will take place at the Oregon Convention Center in > Portland, Oregon July 24-28, 2006. > > OSCON 2006 will feature the projects, technologies, and skills that you need > to write and deploy killer modern apps. We're looking for proposals on > platforms and applications around: > > * Multimedia including voice (VoIP) and video > * AI including spam-busting, classification, clustering, and data mining > * Collaboration including email, calendars, RSS, OPML, mashups, IM, > presence, and session initialization > * Project best practices including governance, starting a project, and > managing communities > * Microsoft Windows-based open source projects including .NET, Mono, and > regular C/C++/Visual Basic Windows apps > * Enterprise Java techniques including integration, testing, and scalable > deployment solutions > * Linux kernel skills for sysadmins including virtualization, tuning, and > device drivers > * Device hacking including iPods, Nintendo, PSP, XBox 360, and beyond > * Design including CSS, GUI, and user experience (XP) > * Entrepreneurial topics including management for techies, how to go into > business for yourself, and business models that work > * Security including hardening, hacking, root kits (Sony and otherwise), and > intrusion detection/cleanup > * Fun subjects with no immediate commercial application including retro > computing, games, and BitTorrent > > Tracks at OSCON will include: > > * Desktop Apps > * Databases, including MySQL, PostgreSQL, Ingres, and others > * Emerging Topics > * Java > * Linux Kernel for SysAdmins > * Linux for Programmers > * Perl, celebrating the 10th year of The Perl Conference! 
> * PHP > * Programming, including everything that's not specific to a particular > language > * Python > * Security > * Ruby, including Ruby on Rails > * Web Apps, including Apache > * Windows > > - -- > http://mail.python.org/mailman/listinfo/python-announce-list > > Support the Python Software Foundation: > http://www.python.org/psf/donations.html > > ------- End of Forwarded Message > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > Beatrice D?ring Change Maker Tel: 031- 7750940 J?rntorget 3 Mobil: 0734- 22 89 06 413 04 G?teborg E-post: bea at changemaker.nu www.changemaker.nu "Alla dessa m?sten och alldaglighet. Allt detta som binder v?r verklighet i bojor av skam och rep utav tv?ng. Alla dessa kedjor som binder v?r s?ng. Jag skall slita dem alla i sm?, sm? stycken och m?jligtvis av resterna g?ra mig smycken." - hemlig From lac at strakt.com Fri Dec 30 00:31:20 2005 From: lac at strakt.com (Laura Creighton) Date: Fri, 30 Dec 2005 00:31:20 +0100 Subject: [pypy-dev] OCSCON 2006 in Portland again, paper deadline Feb 13 In-Reply-To: Message from Beatrice During of "Fri, 30 Dec 2005 00:16:09 +0100." References: <200512291447.jBTEleJC023145@theraft.strakt.com> Message-ID: <200512292331.jBTNVK4u009403@theraft.strakt.com> In a message of Fri, 30 Dec 2005 00:16:09 +0100, Beatrice During writes: >Hi there > >We have such a calendar on codespeak - Michael have informed about it on >pypy-funding. You can find it: > >http://codespeak.net/pypy/dist/pypy/doc/contact.html > >or directly at: > >http://pypycal.sabi.net/. What program is this? how to add a new entry ... ie OSCON ... is not obvious for this bear of very little brain. What doc need I read? I still think a nice link on codespeak would make sense rather than having to store a bookmark, which will only work on a machine where you have stored things ... Laura From bea at netg.se Fri Dec 30 00:39:50 2005 From: bea at netg.se (Beatrice During) Date: Fri, 30 Dec 2005 00:39:50 +0100 (CET) Subject: [pypy-dev] OCSCON 2006 in Portland again, paper deadline Feb 13 In-Reply-To: <200512292331.jBTNVK4u009403@theraft.strakt.com> References: <200512291447.jBTEleJC023145@theraft.strakt.com> <200512292331.jBTNVK4u009403@theraft.strakt.com> Message-ID: Hi there On Fri, 30 Dec 2005, Laura Creighton wrote: > In a message of Fri, 30 Dec 2005 00:16:09 +0100, Beatrice During writes: >> Hi there >> >> We have such a calendar on codespeak - Michael have informed about it on >> pypy-funding. You can find it: >> >> http://codespeak.net/pypy/dist/pypy/doc/contact.html >> >> or directly at: >> >> http://pypycal.sabi.net/. > > > What program is this? how to add a new entry ... ie OSCON ... is > not obvious for this bear of very little brain. What doc need I > read? I still think a nice link on codespeak would make sense > rather than having to store a bookmark, which will only work on > a machine where you have stored things ... We do have a link from codespeak - from the contact page. 
Unfortunately Michael is currently the only one who can update so best thing is to email him - or he sees this ;-) He is usually very vigilant in this - but maybe not now because he is hopefully enjoying his vacation ;-) Cheers Bea From tismer at stackless.com Fri Dec 30 00:47:38 2005 From: tismer at stackless.com (Christian Tismer) Date: Fri, 30 Dec 2005 00:47:38 +0100 Subject: [pypy-dev] Thread/gil/java question In-Reply-To: <716621DCB4468F46BBCC1BCFBED45C120129DA70@XCH-NW-2V2.nw.nos.boeing.com> References: <716621DCB4468F46BBCC1BCFBED45C120129DA70@XCH-NW-2V2.nw.nos.boeing.com> Message-ID: <43B4759A.4020308@stackless.com> Michael, you already got a very good answer, since you happened to hit the former maintainer of Jython. > Thank you for the quick response. Let me get this straight: If I use > Jython instead of CPython, then my Python threads can migrate to > different CPUs? > Not sure what you meant by "... granularity is such that not much could > be assumed for user programs..."--could you clarify that a bit? You already admitted that you have no idea about Java. This list is for developing PyPy. Questions beyond this are of course possible, but I think we have one thing in common: Everybody in this list knows to use the literature, and expects everybody else to do the same. Submissions to the list are meant to create new knowldge or to leverage it. Education is not the primary goal, but supporting PyPy development. If I was in the position to do that, I'd ask you to ask your question to yourself, investigate about it and share the gathered knowledge with us, or check it against other opinions. That would be helpful input to the PyPy project. It is ok to ask after you got stuck with some effort tried. Thanks for your support - chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Johannes-Niemeyer-Weg 9A : *Starship* http://starship.python.net/ 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ work +49 30 802 86 56 mobile +49 173 24 18 776 fax +49 30 80 90 57 05 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From michael.drumheller at boeing.com Fri Dec 30 09:24:14 2005 From: michael.drumheller at boeing.com (Drumheller, Michael) Date: Fri, 30 Dec 2005 00:24:14 -0800 Subject: [pypy-dev] Thread/gil/java question Message-ID: <716621DCB4468F46BBCC1BCFBED45C120129DA73@XCH-NW-2V2.nw.nos.boeing.com> Thank you. I apologize for misusing the list. Michael -----Original Message----- From: Christian Tismer [mailto:tismer at stackless.com] Sent: Thursday, December 29, 2005 3:48 PM To: Drumheller, Michael Cc: Samuele Pedroni; pypy-dev at codespeak.net Subject: Re: [pypy-dev] Thread/gil/java question Michael, you already got a very good answer, since you happened to hit the former maintainer of Jython. > Thank you for the quick response. Let me get this straight: If I use > Jython instead of CPython, then my Python threads can migrate to > different CPUs? > Not sure what you meant by "... granularity is such that not much > could be assumed for user programs..."--could you clarify that a bit? You already admitted that you have no idea about Java. This list is for developing PyPy. Questions beyond this are of course possible, but I think we have one thing in common: Everybody in this list knows to use the literature, and expects everybody else to do the same. Submissions to the list are meant to create new knowldge or to leverage it. 
Education is not the primary goal, but supporting PyPy development. If I was in the position to do that, I'd ask you to ask your question to yourself, investigate about it and share the gathered knowledge with us, or check it against other opinions. That would be helpful input to the PyPy project. It is ok to ask after you got stuck with some effort tried. Thanks for your support - chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Johannes-Niemeyer-Weg 9A : *Starship* http://starship.python.net/ 14109 Berlin : PGP key -> http://wwwkeys.pgp.net/ work +49 30 802 86 56 mobile +49 173 24 18 776 fax +49 30 80 90 57 05 PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/