From anto.cuni at gmail.com Fri Apr 1 00:00:53 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 01 Apr 2011 00:00:53 +0200 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> Message-ID: <4D94F995.5020509@gmail.com> On 31/03/11 22:05, Andrew Brown wrote: > python double-interpreted: > 78m (did not finish) > pypy-c (with jit) double-interpreted: 41m 34.528s this is interesting. We are beating cpython by more than 2x even in a "worst case" scenario, because interpreters in theory are not a very good target for tracing JITs. However, it's not the first time that we experience this, so it might be that this interpreter/tracing JIT thing is just a legend :-) > translated interpreter no jit: 45s > translated interpreter jit: 7.5s > translated direct to C, gcc -O0 > translate: 0.2s > compile: 0.4s > run: 18.5s > translated direct to C, gcc -O1 > translate: 0.2s > compile: 0.85s > run: 1.28s > translated direct to C, gcc -O2 > translate: 0.2s > compile: 2.0s > run: 1.34s these are cool as well. We are 3x faster than gcc -O0 and ~3x slower than -O1 and -O2. Pretty good, I'd say :-) ciao, anto From anto.cuni at gmail.com Fri Apr 1 00:02:52 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 01 Apr 2011 00:02:52 +0200 Subject: [pypy-dev] The JVM backend and Jython In-Reply-To: References: <4D92D961.3070309@gmail.com> <4D937258.6090500@gmail.com> Message-ID: <4D94FA0C.6080206@gmail.com> On 31/03/11 21:57, Maciej Fijalkowski wrote: >> Ok, so if Ademan tells me that he's not going to work on the ootype-virtualref >> branch, I'll try to finish the work so you can start playing with it. > > Note to frank: this is kind of cool but only needed for the JIT, > otherwise it's a normal reference. well, no. Virtualrefs were introduced for the JIT, but they also need to be supported by normal backends. This is why translation is broken at the moment. It is true that the implementation is straightforward, though (I suppose this is what you meant originally :-)) ciao, Anto From alex.gaynor at gmail.com Fri Apr 1 00:29:57 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Thu, 31 Mar 2011 18:29:57 -0400 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: <4D94F995.5020509@gmail.com> References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> Message-ID: On Thu, Mar 31, 2011 at 6:00 PM, Antonio Cuni wrote: > On 31/03/11 22:05, Andrew Brown wrote: > > > python double-interpreted: > 78m (did not finish) > > pypy-c (with jit) double-interpreted: 41m 34.528s > > this is interesting. We are beating cpython by more than 2x even in a > "worst > case" scenario, because interpreters in theory are not a very good target > for > tracing JITs. > However, it's not the first time that we experience this, so it might be > that > this interpreter/tracing JIT thing is just a legend :-) > > Well the issue with tracing an interpreter is the large number of paths, a brainfuck interpreter has relatively few paths compared to something like a Python VM. > > translated interpreter no jit: 45s > > translated interpreter jit: 7.5s > > translated direct to C, gcc -O0 > > translate: 0.2s > > compile: 0.4s > > run: 18.5s > > translated direct to C, gcc -O1 > > translate: 0.2s > > compile: 0.85s > > run: 1.28s > > translated direct to C, gcc -O2 > > translate: 0.2s > > compile: 2.0s > > run: 1.34s > > these are cool as well. We are 3x faster than gcc -O0 and ~3x slower than > -O1 > and -O2. 
Pretty good, I'd say :-) > > ciao, > anto > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From szoth at ubertechnique.com Fri Apr 1 02:27:14 2011 From: szoth at ubertechnique.com (Seth de l'Isle) Date: Thu, 31 Mar 2011 16:27:14 -0800 Subject: [pypy-dev] [PATCH] fix executing a file under sandbox by implementing fstat Message-ID: I ran into a problem trying to follow the instructions to run a python file under the sandbox version of pypy-c. See the console log: http://paste.pocoo.org/show/363348 I was following the instructions here: http://codespeak.net/pypy/dist/pypy/doc/sandbox.html I got some help from arigato, ronny and antocuni on IRC (see log below) and they gave me the courage to hack on sandlib.py a little to see if I could get things working. http://www.tismer.com/pypy/irc-logs/pypy/%23pypy.log.20110331 The following patch changes the code so that it tracks the virtual file system nodes that correspond to each virtual file descriptor so that the node.stat() function can be used for fstat the same way it is used for stat. Thanks! diff -r 601862ed288e pypy/translator/sandbox/sandlib.py --- a/pypy/translator/sandbox/sandlib.py Mon Mar 28 13:12:49 2011 +0200 +++ b/pypy/translator/sandbox/sandlib.py Thu Mar 31 16:13:41 2011 -0800 @@ -391,6 +391,7 @@ super(VirtualizedSandboxedProc, self).__init__(*args, **kwds) self.virtual_root = self.build_virtual_root() self.open_fds = {} # {virtual_fd: real_file_object} + self.fd_to_node = {} def build_virtual_root(self): raise NotImplementedError("must be overridden") @@ -425,10 +426,17 @@ def do_ll_os__ll_os_stat(self, vpathname): node = self.get_node(vpathname) return node.stat() + do_ll_os__ll_os_stat.resulttype = s_StatResult do_ll_os__ll_os_lstat = do_ll_os__ll_os_stat + def do_ll_os__ll_os_fstat(self, fd): + node = self.fd_to_node[fd] + return node.stat() + + do_ll_os__ll_os_fstat.resulttype = s_StatResult + def do_ll_os__ll_os_isatty(self, fd): return self.virtual_console_isatty and fd in (0, 1, 2) @@ -452,11 +460,14 @@ raise OSError(errno.EPERM, "write access denied") # all other flags are ignored f = node.open() - return self.allocate_fd(f) + fd = self.allocate_fd(f) + self.fd_to_node[fd] = node + return fd def do_ll_os__ll_os_close(self, fd): f = self.get_file(fd) del self.open_fds[fd] + del self.fd_to_node[fd] f.close() def do_ll_os__ll_os_read(self, fd, size): From fijall at gmail.com Fri Apr 1 06:26:46 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 31 Mar 2011 22:26:46 -0600 Subject: [pypy-dev] The JVM backend and Jython In-Reply-To: <4D94FA0C.6080206@gmail.com> References: <4D92D961.3070309@gmail.com> <4D937258.6090500@gmail.com> <4D94FA0C.6080206@gmail.com> Message-ID: On Thu, Mar 31, 2011 at 4:02 PM, Antonio Cuni wrote: > On 31/03/11 21:57, Maciej Fijalkowski wrote: > >>> Ok, so if Ademan tells me that he's not going to work on the ootype-virtualref >>> branch, I'll try to finish the work so you can start playing with it. >> >> Note to frank: this is kind of cool but only needed for the JIT, >> otherwise it's a normal reference. > > well, no. Virtualrefs were introduced for the JIT, but they also need to be > supported by normal backends. 
?This is why translation is broken at the moment. > > It is true that the implementation is straightforward, though (I suppose this > is what you meant originally :-)) Sure. I was mostly saying "the complex part of the implementation for ootype can be ommited if we skip the JIT part". > > ciao, > Anto > From stefan_ml at behnel.de Fri Apr 1 07:59:36 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 01 Apr 2011 07:59:36 +0200 Subject: [pypy-dev] The JVM backend and Jython In-Reply-To: References: Message-ID: fwierzbicki at gmail.com, 30.03.2011 04:40: > I've been thinking about the first steps towards collaboration between > the Jython project and the PyPy project. It looks like it isn't going > to be too long before we are all (CPython, PyPy, IronPython, Jython, > etc) working on a single shared repository for all of our standard > library .py code. On a somewhat related note, the Cython project is pushing towards reimplementing parts of CPython's stdlib C modules in Cython. That would make it easier for other projects to use the implementation in one way or another, rather than having to reimplement and maintain it separately by following C code. http://thread.gmane.org/gmane.comp.python.devel/122273/focus=122716 The advantage for other-than-CPython-Pythons obviously depends on the module. If it's just implemented in C for performance reasons (e.g. itertools etc.), it would likely end up as a Python module with additional static typing, which would make it easy to adapt. If it's using lots of stuff from libc and C I/O, or even from external libraries, the code itself would obviously be less useful, although it would likely still be easier to port changes/fixes. Stefan From anto.cuni at gmail.com Fri Apr 1 11:23:52 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 01 Apr 2011 11:23:52 +0200 Subject: [pypy-dev] [PATCH] fix executing a file under sandbox by implementing fstat In-Reply-To: References: Message-ID: <4D9599A8.8080106@gmail.com> Hello Seth, thank you for your patch! On 01/04/11 02:27, Seth de l'Isle wrote: [cut] > The following patch changes the code so that it tracks the virtual > file system nodes that correspond to each virtual file descriptor so > that the node.stat() function can > be used for fstat the same way it is used for stat. could you please write the corresponding test in test_sandlib.py please? > + def do_ll_os__ll_os_fstat(self, fd): > + node = self.fd_to_node[fd] > + return node.stat() Also, you probably need to handle the case in which we call fstat with a fd which doesn't exist (and write the corresponding test :-)) ciao, Anto From lac at openend.se Sat Apr 2 21:57:15 2011 From: lac at openend.se (Laura Creighton) Date: Sat, 2 Apr 2011 21:57:15 +0200 Subject: [pypy-dev] Europython reservations Message-ID: <201104021957.p32JvFe0007507@theraft.openend.se> Jacob and I just arranged to be staying at Residence Michelangiolo June 19-26 . see: http://www.tripadvisor.com/Hotel_Review-g187895-d581477-Reviews-Residence_Michelangiolo-Florence_Tuscany.html among others for review. At 980 Euros for the whole time, it seems to be substantially cheaper that the conference hotel, and not that far a walk. Other differences -- no included breakfast, but pantry -- refrigerator, counter, and clean up space -- in your room. For people who wanted to make their own breakfast anyway, this is a win. There may be a cook space too. There is internet by wire in each room but no wifi. On the 5th, we should be on a ferry to get us to kayaking vacation in Corsica. 
see: http://ucpa.se/Aktivitetsprogram-havskajaksafari-Korsika (in Swedish, but maybe it will help you to find the same program from ucpa described in whatever language you prefer.) This leaves the 26th to the 5th to figure out. Vacation in Tuscany is one idea. Or to extend the sprint, because 2 days is not enough, is another. If so, maybe we could hold it at this place? Seems spacious enough. I thought it would be fun to fill Residence Michelangiolo with friends, at least for the 19-26, so posted this here.

Laura

From lac at openend.se Mon Apr 4 14:34:44 2011
From: lac at openend.se (Laura Creighton)
Date: Mon, 4 Apr 2011 14:34:44 +0200
Subject: Post Easter PyPy Sprint, Göteborg Sweden April 25 -- May 1.
Message-ID: <201104041234.p34CYia5004393@theraft.openend.se>

The next PyPy sprint will be in Gothenburg, Sweden. It is a public sprint, very suitable for newcomers. We'll focus on making the 1.5 release (if it hasn't already happened) and whatever interests the Sprint attendees.

Topics and goals
----------------

The main goal is to polish and release PyPy 1.5, supporting Python 2.7 as well as the last few months' improvements in the JIT (provided that it hasn't already happened). Other topics:

- Going over our documentation, and classifying our docs in terms of mouldiness. Deciding what needs writing, and maybe writing it.
- Helping people get their code running with PyPy
- maybe work on EuroPython Training, and talks
- Summer of Code preparation
- speed.pypy.org
- any other programming task is welcome too -- e.g. tweaking the Python or JavaScript interpreter, Stackless support, and so on.

Location
--------

The sprint will be held in the apartment of Laura Creighton and Jacob Hallén, which is at Götabergsgatan 22 in Gothenburg, Sweden. Here is a map_. This is in central Gothenburg. It is between the tram_ stops of Vasaplatsen and Valand (a distance of 4 blocks), where many lines call -- the 2, 3, 4, 5, 7, 10 and 13.

.. _tram: http://www.vasttrafik.se/en/
.. _map: http://bit.ly/grRuQe

Probably cheapest, and not too far away, is to book accommodation at `SGS Veckobostader`_. The `Elite Park Avenyn Hotel`_ is a luxury hotel just a few blocks away. There are scores of hotels a short walk away from the sprint location, suitable for every budget, desire for luxury, and desire for the unusual. You could, for instance, stay on a `boat`_. Options are too numerous to go into here. Just ask on the mailing list or on the blog.

.. _`SGS Veckobostader`: http://www.sgsveckobostader.se/en
.. _`Elite Park Avenyn Hotel`: http://www.elite.se/hotell/goteborg/park/
.. _`boat`: http://www.liseberg.se/en/home/Accommodation/Hotel/Hotel-Barken-Viking/

Hours will be from 10:00 until people have had enough. It's a good idea to arrive a day before the sprint starts and leave a day later. In the middle of the sprint there usually is a break day, and it's usually ok to take half-days off if you feel like it.

Good to Know
------------

Sweden is not part of the Euro zone. One SEK (krona in singular, kronor in plural) is roughly 1/10th of a Euro (9.36 SEK to 1 Euro).

The venue is central in Gothenburg. There is a large selection of places to get food nearby, from edible-and-cheap to outstanding. We often cook meals together, so let us know if you have any food allergies, dislikes, or special requirements.

Sweden uses the same kind of plugs as Germany. 230V AC.

The Sprint will be held the week following Easter. This means, as always, that Gothcon_ will be taking place the weekend before (Easter weekend).
Gothcon, now in its 35th year, is the largest European game players conference. Some of you may be interested in arriving early for the board games. The conference site is only in Swedish, alas. You don't need to register in advance unless you are planning to host a tournament (and it's too late for that anyway).

.. _Gothcon: http://www.gothcon.se/

Getting Here
------------

If you are coming by train, you will arrive at the `Central Station`_. It is about 12 blocks to the site from there, or you can take a tram_.

There are two airports which are local to Göteborg, `Landvetter`_ (the main one) and `Gothenburg City Airport`_ (where some budget airlines fly). If you arrive at `Landvetter`_, the airport bus stops right downtown at the `Elite Park Avenyn Hotel`_, which is the second stop, 4 blocks from the Sprint site, as well as at the end of the line, which is the `Central Station`_. If you arrive at `Gothenburg City Airport`_, take the bus to the end of the line. You will be at the `Central Station`_.

You can also arrive by ferry_, from either Kiel in Germany or Frederikshavn in Denmark.

.. _`Central Station`: http://bit.ly/fON43p
.. _`Landvetter`: http://swedavia.se/en/Goteborg/Traveller-information/Traffic-information/
.. _`Gothenburg City Airport`: http://www.goteborgairport.se/eng.asp
.. _ferry: http://www.stenaline.nl/en/ferry/

Who's Coming?
--------------

If you'd like to come, please let us know when you will be arriving and leaving, as well as letting us know your interests. We'll keep a list of `people`_ which we'll update (you can do so yourself if you have bitbucket pypy commit rights).

.. _`people`: https://bitbucket.org/pypy/extradoc/src/tip/sprintinfo/gothenburg-2011/people.txt

From brownan at gmail.com Mon Apr 4 16:12:33 2011
From: brownan at gmail.com (Andrew Brown)
Date: Mon, 4 Apr 2011 10:12:33 -0400
Subject: [pypy-dev] Pypy custom interpreter JIT question
In-Reply-To:
References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com>
Message-ID:

Submitted for everyone's approval, I've written a draft of a pypy tutorial going over everything I learned in writing this example interpreter.

https://bitbucket.org/brownan/pypy-tutorial/src

See the main document at tutorial.rst

I'd love some feedback on it. I've made an effort to keep things accurate yet simple, but if there are any inaccuracies, let me know. Or fork the repo and make the correction yourself =)

-Andrew

On Thu, Mar 31, 2011 at 6:29 PM, Alex Gaynor wrote:
>
>
> On Thu, Mar 31, 2011 at 6:00 PM, Antonio Cuni wrote:
>
>> On 31/03/11 22:05, Andrew Brown wrote:
>>
>> > python double-interpreted: > 78m (did not finish)
>> > pypy-c (with jit) double-interpreted: 41m 34.528s
>>
>> this is interesting. We are beating cpython by more than 2x even in a
>> "worst case" scenario, because interpreters in theory are not a very good
>> target for tracing JITs.
>> However, it's not the first time that we experience this, so it might be
>> that this interpreter/tracing JIT thing is just a legend :-)
>>
>>
> Well the issue with tracing an interpreter is the large number of paths, a
> brainfuck interpreter has relatively few paths compared to something like a
> Python VM.
> > > > >> > translated interpreter no jit: 45s >> > translated interpreter jit: 7.5s >> > translated direct to C, gcc -O0 >> > translate: 0.2s >> > compile: 0.4s >> > run: 18.5s >> > translated direct to C, gcc -O1 >> > translate: 0.2s >> > compile: 0.85s >> > run: 1.28s >> > translated direct to C, gcc -O2 >> > translate: 0.2s >> > compile: 2.0s >> > run: 1.34s >> >> these are cool as well. We are 3x faster than gcc -O0 and ~3x slower than >> -O1 >> and -O2. Pretty good, I'd say :-) >> >> ciao, >> anto >> _______________________________________________ >> pypy-dev at codespeak.net >> http://codespeak.net/mailman/listinfo/pypy-dev >> > > Alex > > -- > "I disapprove of what you say, but I will defend to the death your right to > say it." -- Evelyn Beatrice Hall (summarizing Voltaire) > "The people's good is the highest law." -- Cicero > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cfbolz at gmx.de Mon Apr 4 17:22:01 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Mon, 04 Apr 2011 17:22:01 +0200 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> Message-ID: <4D99E219.3050308@gmx.de> On 04/04/2011 04:12 PM, Andrew Brown wrote: > Submitted for everyone's approval, I've written a draft of a pypy > tutorial going over everything I learned in writing this example > interpreter. > > https://bitbucket.org/brownan/pypy-tutorial/src > > See the main document at tutorial.rst > > I'd > love some feedback on it. I've made an effort to keep things accurate > yet simple, but if there are any inaccuracies, let me know. Or fork the > repo and make the correction yourself =) Looks very nice! Would you be up to making a guest post out of this on the PyPy blog? Carl Friedrich From arigo at tunes.org Mon Apr 4 17:16:55 2011 From: arigo at tunes.org (Armin Rigo) Date: Mon, 4 Apr 2011 17:16:55 +0200 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> Message-ID: Hi Andrew, On Mon, Apr 4, 2011 at 4:12 PM, Andrew Brown wrote: > Submitted for everyone's approval, I've written a draft of a pypy tutorial > going over everything I learned in writing this example interpreter. > https://bitbucket.org/brownan/pypy-tutorial/src Excellent and, as far as I can tell, very clear too! A bient?t, Armin. From brownan at gmail.com Mon Apr 4 17:43:07 2011 From: brownan at gmail.com (Andrew Brown) Date: Mon, 4 Apr 2011 11:43:07 -0400 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: <4D99E219.3050308@gmx.de> References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> Message-ID: On Mon, Apr 4, 2011 at 11:22 AM, Carl Friedrich Bolz wrote: > > Looks very nice! Would you be up to making a guest post out of this on > the PyPy blog? > > Sure! What needs to be done to turn it into a blog post and get it posted? I assume there are format considerations, but I'm also open to any content suggestions and feedback before it "goes live". -Andrew -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cfbolz at gmx.de Mon Apr 4 18:17:17 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Mon, 04 Apr 2011 18:17:17 +0200 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> Message-ID: <4D99EF0D.1000507@gmx.de> On 04/04/2011 05:43 PM, Andrew Brown wrote: > On Mon, Apr 4, 2011 at 11:22 AM, Carl Friedrich Bolz > wrote: > > > Looks very nice! Would you be up to making a guest post out of this on > the PyPy blog? > > Sure! What needs to be done to turn it into a blog post and get it > posted? I assume there are format considerations, but I'm also open to > any content suggestions and feedback before it "goes live". I looked again, added two places that could use small fixes. And I updated two links, see my merge request. Apart from that, the blog post would not need many changes. It would need an introductionary line like: "This is a guest post by Andrew Brown. It's a tutorial for how to write an interpreter with PyPy, generating a JIT. It is suitable for beginners and assumes very little knowledge of PyPy." Then we should link to the repo, and replace all file links with links to bitbucket. I can do all that, and post it (tomorrow), if you are fine with that. Carl Friedrich From anto.cuni at gmail.com Mon Apr 4 18:34:45 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Mon, 04 Apr 2011 18:34:45 +0200 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> Message-ID: <4D99F325.5050605@gmail.com> On 04/04/11 17:43, Andrew Brown wrote: > Sure! What needs to be done to turn it into a blog post and get it posted? I > assume there are format considerations, but I'm also open to any content > suggestions and feedback before it "goes live". Hello Andrew, thanks for the tutorial, it's really well written and easy to read. Two notes: 1) do you know about the existence of rlib.streamio? It's is part of the "RPython standard library" and it allows you to read/write files in a higher level way than file descriptors 2) Maybe the tutorial is a bit too long to fit in just one post; what about splitting it into two parts? (e.g., one until "Adding JIT" and one after). ciao, Anto From brownan at gmail.com Mon Apr 4 18:30:37 2011 From: brownan at gmail.com (Andrew Brown) Date: Mon, 4 Apr 2011 12:30:37 -0400 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: <4D99EF0D.1000507@gmx.de> References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99EF0D.1000507@gmx.de> Message-ID: Thanks for the feedback. I'll clarify those parts, and I have a few touch-ups of my own. Also, I think I forgot to add my name =) I'm fine with you posting it as you described. An into line like that was just what I had in mind. I'd wait until tomorrow though to see if any other feedback surfaces. On Mon, Apr 4, 2011 at 12:17 PM, Carl Friedrich Bolz wrote: > On 04/04/2011 05:43 PM, Andrew Brown wrote: > >> On Mon, Apr 4, 2011 at 11:22 AM, Carl Friedrich Bolz > > wrote: >> >> >> Looks very nice! Would you be up to making a guest post out of this on >> the PyPy blog? >> >> Sure! What needs to be done to turn it into a blog post and get it >> posted? I assume there are format considerations, but I'm also open to >> any content suggestions and feedback before it "goes live". >> > > I looked again, added two places that could use small fixes. 
And I updated > two links, see my merge request. Apart from that, the blog post would not > need many changes. It would need an introductionary line like: > > "This is a guest post by Andrew Brown. It's a tutorial for how to write an > interpreter with PyPy, generating a JIT. It is suitable for beginners and > assumes very little knowledge of PyPy." > > Then we should link to the repo, and replace all file links with links to > bitbucket. I can do all that, and post it (tomorrow), if you are fine with > that. > > Carl Friedrich > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brownan at gmail.com Mon Apr 4 19:46:13 2011 From: brownan at gmail.com (Andrew Brown) Date: Mon, 4 Apr 2011 13:46:13 -0400 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: <4D99F325.5050605@gmail.com> References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> Message-ID: On Mon, Apr 4, 2011 at 12:34 PM, Antonio Cuni wrote: > 1) do you know about the existence of rlib.streamio? It's is part of the > "RPython standard library" and it allows you to read/write files in a > higher > level way than file descriptors > > No, I didn't. That's good to know. I don't think it's worth updating the examples though, so unless you disagree, I'll just add a note about this module's existence. > 2) Maybe the tutorial is a bit too long to fit in just one post; what about > splitting it into two parts? (e.g., one until "Adding JIT" and one after). > > Yes, it is quite long. Carl, feel free to break it up as necessary when you post it. Breaking it up at the "Adding JIT" section seems ideal, since both parts are useful on their own. -Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Mon Apr 4 20:12:57 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Mon, 04 Apr 2011 20:12:57 +0200 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> Message-ID: <4D9A0A29.1000603@gmail.com> On 04/04/11 19:46, Andrew Brown wrote: > 1) do you know about the existence of rlib.streamio? It's is part of the > "RPython standard library" and it allows you to read/write files in a higher > level way than file descriptors > > No, I didn't. That's good to know. I don't think it's worth updating the > examples though, so unless you disagree, I'll just add a note about this > module's existence. sure, I think that for this example, using fd is fine. Btw, in case you want to do more with pypy, having a look to rlib might be a good idea, there is useful stuff there :) ciao, Anto From brownan at gmail.com Mon Apr 4 22:28:51 2011 From: brownan at gmail.com (Andrew Brown) Date: Mon, 4 Apr 2011 16:28:51 -0400 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: <4D9A0A29.1000603@gmail.com> References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> Message-ID: On Mon, Apr 4, 2011 at 2:12 PM, Antonio Cuni wrote: > sure, I think that for this example, using fd is fine. > > Btw, in case you want to do more with pypy, having a look to rlib might be > a > good idea, there is useful stuff there :) > > Definitely. In any case, I've made some changes, re-worded some things. Carl, I've addressed your suggestions, let me know what you think. 
I also re-worded a few things in the "Adding JIT" section to make it flow a bit better assuming it will be split up. It may still need some editing though. -Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuzzyman at gmail.com Tue Apr 5 02:22:26 2011 From: fuzzyman at gmail.com (Michael Foord) Date: Tue, 5 Apr 2011 01:22:26 +0100 Subject: [pypy-dev] A Little Bit of Python Episode 17 Message-ID: An interview with Armin, recorded at the Leysin sprint: http://advocacy.python.org/podcasts/littlebit/2011-04-04.mp3 It turned out pretty well I think. All the best, Michael Foord -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From igouy2 at yahoo.com Tue Apr 5 02:23:36 2011 From: igouy2 at yahoo.com (Isaac Gouy) Date: Mon, 4 Apr 2011 17:23:36 -0700 (PDT) Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? Message-ID: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Hi A simple yes / no question. Do you want PyPy to be shown in the benchmarks game or not? Please consider the question amongst yourselves and then let me know. Of course, I'll make up my own mind but at least I'll be able to take your wishes into account. best wishes, Isaac From dimaqq at gmail.com Tue Apr 5 02:46:29 2011 From: dimaqq at gmail.com (Dima Tisnek) Date: Mon, 4 Apr 2011 17:46:29 -0700 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <602617.13125.qm@web65613.mail.ac4.yahoo.com> References: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Message-ID: On 4 April 2011 17:23, Isaac Gouy wrote: > Hi > > A simple yes / no question. > > Do you want PyPy to be shown in the benchmarks game or not? > > Please consider the question amongst yourselves and then let me know. > > Of course, I'll make up my own mind but at least I'll be able to take your wishes into account. > > best wishes, Isaac > > > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > I think it ought to be included, although I have big reservation about some of the benchmarks. Some work and/or discussion on benchmarks would be in order. my 2c. d. From lstask at gmail.com Tue Apr 5 07:25:47 2011 From: lstask at gmail.com (Liam Staskawicz) Date: Mon, 4 Apr 2011 22:25:47 -0700 Subject: [pypy-dev] minimum system requirements, build configuration Message-ID: <3B1535B65AE041F0B53FF5F13BF8178D@gmail.com> Hi - I've been looking at pypy as a possible application runtime in an ARM9 Linux based system. I understand that the in-progress ARM jit backend targets only ARMv7, but I'm still interested in characterizing a translated pypy-c interpreter on this system with regard to CPU and memory usage. Of course I look forward to possible ARMv5 support for the jit in the future, but if I've understood correctly, fully interpreted mode should be supported - please let me know if this is not correct! To that end, are there minimum recommended system requirements for pypy, specifically in terms of memory? 
As a reference, something like the Mono 'Small Footprint' wiki page applied to pypy would be what I'm looking for: http://www.mono-project.com/Small_footprint On a related note, most of the build config info I've come across seems to revolve around a scratchbox2 build environment targeting maemo - indeed it seems to be the main other 'platform' option in the pypy build script, other than 'host'. Is there any particularly tight coupling between maemo and pypy, or should I hope to be able to set up a generic sbox2 environment for my target and come away with a working build? Are there any other docs or config snippets that detail how to set up a generic cross compile environment for pypy? Thanks for any tips or relevant info! Liam -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.leslie.ttg at gmail.com Tue Apr 5 07:45:21 2011 From: william.leslie.ttg at gmail.com (William ML Leslie) Date: Tue, 5 Apr 2011 15:45:21 +1000 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <602617.13125.qm@web65613.mail.ac4.yahoo.com> References: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Message-ID: On 5 April 2011 10:23, Isaac Gouy wrote: > Hi > > A simple yes / no question. > > Do you want PyPy to be shown in the benchmarks game or not? > > Please consider the question amongst yourselves and then let me know. There seems to have been general confusion here about what the implementations of these benchmarks are supposed to represent. Are they to be representative of idiomatic code, or optimised for a particular implementation? Or something else entirely? Since pypy have generally tried to optimise for and encourage idiomatic python usage, those benchmark implementations that go to great length to use confusing and non-standard performance hacks represent neither the performance of real-world code, nor what one can do with a specific implementation. If the answer is that they are going to be tuned to a particular implementation and that implementation is not going to be ours, we probably *could* live with that: realistically, the sort of code people are applying pypy to occasionally contains performance hacks that are no longer relevant and possibly detrimental. But it does seem to change the meaning of the benchmark, and it would be useful to get some authoritative clarification on this before we consider it. -- William Leslie From arigo at tunes.org Tue Apr 5 09:59:29 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 5 Apr 2011 09:59:29 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <602617.13125.qm@web65613.mail.ac4.yahoo.com> References: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Message-ID: Hi, On Tue, Apr 5, 2011 at 2:23 AM, Isaac Gouy wrote: > A simple yes / no question. > Do you want PyPy to be shown in the benchmarks game or not? Sorry for missing general knowledge of what you mean by "benchmarks game". According to Google, it is probably what I know as the language shootout (shootout.alioth.debian.org). Is it the case? Assuming it is, then my position is pretty much the same as William Leslie: I have little interest if the benchmarks that PyPy runs with are written in completely non-idiomatic ways, super hand-optimized for CPython, for the reasons explained in detail by William. It would be interesting, however, if you accept versions rewritten in "just plain Python". A bient?t, Armin. 
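An illustrative aside on the "hand-optimized for CPython" point above: the pattern William and Armin are describing typically looks something like the following hypothetical sketch (the function names and the checksum are invented for illustration; this is not code from the shootout or from this thread). The first version is the "just plain Python" style being asked for; the second gets its speed on CPython from tricks like default-argument hoisting and pushing the loop into a C-level builtin.

# Hypothetical sketch, invented for illustration -- not a shootout program.

# "Just plain Python": the straightforward loop.
def checksum_plain(data):
    total = 0
    for b in data:
        total += b * 31
    return total % 65521

# Micro-tuned for CPython: hoist the builtin into a default argument and
# push the iteration into sum()'s C implementation.  Same result, but no
# longer the kind of code people normally write.
def checksum_tuned(data, _sum=sum):
    return _sum(b * 31 for b in data) % 65521

if __name__ == '__main__':
    data = [ord(c) for c in "hello pypy"]
    assert checksum_plain(data) == checksum_tuned(data)

Both compute the same result; the point is only that the second is shaped around CPython's interpreter overhead, which is exactly the property that makes such a benchmark less meaningful when compared across implementations.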
From cfbolz at gmx.de Tue Apr 5 14:49:38 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 05 Apr 2011 14:49:38 +0200 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> Message-ID: <4D9B0FE2.9020003@gmx.de> On 04/04/2011 10:28 PM, Andrew Brown wrote: > On Mon, Apr 4, 2011 at 2:12 PM, Antonio Cuni > wrote: > > sure, I think that for this example, using fd is fine. > > Btw, in case you want to do more with pypy, having a look to rlib > might be a > good idea, there is useful stuff there :) > > Definitely. > > In any case, I've made some changes, re-worded some things. Carl, I've > addressed your suggestions, let me know what you think. > > I also re-worded a few things in the "Adding JIT" section to make it > flow a bit better assuming it will be split up. It may still need some > editing though. Looked good, I just went ahead and posted the first part: http://morepypy.blogspot.com/2011/04/tutorial-writing-interpreter-with-pypy.html Will do the second part tomorrow. Thanks a lot for the tutorial, I think it's really great. Carl Friedrich From renesd at gmail.com Tue Apr 5 14:54:32 2011 From: renesd at gmail.com (=?ISO-8859-1?Q?Ren=E9_Dudfield?=) Date: Tue, 5 Apr 2011 13:54:32 +0100 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: <4D9B0FE2.9020003@gmx.de> References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> <4D9B0FE2.9020003@gmx.de> Message-ID: Nice one! hehe, I like how you managed to avoid the first letter of the second word of the language ;) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brownan at gmail.com Tue Apr 5 15:40:17 2011 From: brownan at gmail.com (Andrew Brown) Date: Tue, 5 Apr 2011 09:40:17 -0400 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> <4D9B0FE2.9020003@gmx.de> Message-ID: On Tue, Apr 5, 2011 at 8:54 AM, Ren? Dudfield wrote: > hehe, I like how you managed to avoid the first letter of the second word > of the language ;) > =) yeah, I had to think about that a bit. Looked good, I just went ahead and posted the first part: > > > http://morepypy.blogspot.com/2011/04/tutorial-writing-interpreter-with-pypy.html > > Will do the second part tomorrow. Thanks a lot for the tutorial, I think > it's really great. > Thanks! It looks great up there. I corrected a typo and changed the wording in 2 places. They're not any huge deals, but if you want to edit the post, see my changes here: https://bitbucket.org/brownan/pypy-tutorial/changeset/8cfb3cd72515 (summary: re-inventing -> re-implementing, toochain -> toolchain, and clarified that the mandelbrot program is written in BF) -Andrew -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arigo at tunes.org Tue Apr 5 15:54:08 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 5 Apr 2011 15:54:08 +0200 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> <4D9B0FE2.9020003@gmx.de> Message-ID: Hi, On Tue, Apr 5, 2011 at 3:40 PM, Andrew Brown wrote: > I corrected a typo and changed the wording in 2 places. They're not any huge > deals, but if you want to edit the post, see my changes here: > https://bitbucket.org/brownan/pypy-tutorial/changeset/8cfb3cd72515 Thanks, applied. I also removed your explicit e-mail address, and replaced it with a link to one of your previous posts on this mailing list, from where people can still get your e-mail if they want --- but at least it's partially filtered against spammers. A bient?t, Armin. From dimaqq at gmail.com Tue Apr 5 16:10:12 2011 From: dimaqq at gmail.com (Dima Tisnek) Date: Tue, 5 Apr 2011 07:10:12 -0700 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Message-ID: you guessed right. I had to guess too btw :) here's where dicsussions about particular tests are and should go, as there's no single point of contact for shootout: http://alioth.debian.org/forum/forum.php?forum_id=999 I'll try to find my posts about particular benchmarks I had problems with. On 5 April 2011 00:59, Armin Rigo wrote: > Hi, > > On Tue, Apr 5, 2011 at 2:23 AM, Isaac Gouy wrote: >> A simple yes / no question. >> Do you want PyPy to be shown in the benchmarks game or not? > > Sorry for missing general knowledge of what you mean by "benchmarks > game". ?According to Google, it is probably what I know as the > language shootout (shootout.alioth.debian.org). ?Is it the case? > > Assuming it is, then my position is pretty much the same as William > Leslie: I have little interest if the benchmarks that PyPy runs with > are written in completely non-idiomatic ways, super hand-optimized for > CPython, for the reasons explained in detail by William. ?It would be > interesting, however, if you accept versions rewritten in "just plain > Python". > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From lstask at gmail.com Tue Apr 5 17:08:57 2011 From: lstask at gmail.com (Liam Staskawicz) Date: Tue, 5 Apr 2011 08:08:57 -0700 Subject: [pypy-dev] minimum system requirements, build configuration In-Reply-To: References: <3B1535B65AE041F0B53FF5F13BF8178D@gmail.com> Message-ID: Hi David, thanks for the quick response. On Tuesday, April 5, 2011 at 7:18 AM, David Schneider wrote: > Hi Liam, > > On Tue, Apr 5, 2011 at 07:25, Liam Staskawicz wrote: > > Hi - I've been looking at pypy as a possible application runtime in an ARM9 > > Linux based system. I understand that the in-progress ARM jit backend > > targets only ARMv7, but I'm still interested in characterizing a translated > > pypy-c interpreter on this system with regard to CPU and memory usage. Of > > course I look forward to possible ARMv5 support for the jit in the future, > > but if I've understood correctly, fully interpreted mode should be supported > > - please let me know if this is not correct! > > This is correct, the non-jitted version can be cross-translated to > ARM. 
Although it still only supports the Boehm GC so running PyPy > without the JIT will be slower than CPython. The JIT currently targets > the ARMv7 architecture, we have not tested on an ARMv5 processor yet, > but it would be interesting to see what is missing for compatibility > with ARMv5. I see - non-jitted is perhaps not such an interesting option at the moment then, I suppose. Are the other GC options not available because they rely on the jit, or is it another dependency? I'll try to find some time to look into what's required for ARMv5 support. If you have any thoughts on particularly ARMv7-centric functionality in the current port that would need to be addressed, that would be interesting to know. > > > To that end, are there minimum recommended system requirements for pypy, > > specifically in terms of memory? As a reference, something like the Mono > > 'Small Footprint' wiki page applied to pypy would be what I'm looking > > for: http://www.mono-project.com/Small_footprint > At this point in time we have no official minimum requirements to run > pypy on ARM, but (quoting Armin) probably anything starting from 64MB > should be fine, depending on the application. The BeagleBoard which is > used to develop the ARM backend has 512 MB and pypy works fine on it > so far. OK, good to know - thanks! > > > On a related note, most of the build config info I've come across seems to > > revolve around a scratchbox2 build environment targeting maemo - indeed it > > seems to be the main other 'platform' option in the pypy build script, other > > than 'host'. Is there any particularly tight coupling between maemo and > > pypy, or should I hope to be able to set up a generic sbox2 environment for > > my target and come away with a working build? Are there any other docs or > > config snippets that detail how to set up a generic cross compile > > environment for pypy? > > We used the scratchbox2 because it allows to create an environment > which can cross-compile programs, but also run small programs in a > context similar to the target environment. This is used to gather > information about the target, required for the translation. > For the ARM backend we are using an Ubuntu rootfs with the build-tools > installed and the gcc-arm cross compiler installed on the host. The > translation toolchain only needs to now the name of the sb2 > environment in which to compile and execute the programs. One thing to > note is that the Python interpreter on the host must match the target > system in the sense of 32/64 bit. Besides these points any sb2 > environment could be used to try the translation for a given target. Sounds good - I'll give it a shot. Thanks again. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Tue Apr 5 17:14:50 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 5 Apr 2011 17:14:50 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Message-ID: Hi Isaac, If you're not interested in this discussion and just want a Yes or No answer to precisely the following question: "should PyPy be added to the benchmarks game's site, with no changes whatsoever to any of the Python benchmarks that are there so far?" then the answer is No. We are not interested in the results of PyPy on such skewed benchmarks, and publishing them would only add to the general confusion. 
Armin From arigo at tunes.org Tue Apr 5 17:54:31 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 5 Apr 2011 17:54:31 +0200 Subject: [pypy-dev] minimum system requirements, build configuration In-Reply-To: References: <3B1535B65AE041F0B53FF5F13BF8178D@gmail.com> Message-ID: Hi, On Tue, Apr 5, 2011 at 5:08 PM, Liam Staskawicz wrote: > This is correct, the non-jitted version can be cross-translated to > ARM. Although it still only supports the Boehm GC I think that you are saying something wrong: "no jit + our own GC + the shadowstack gc root finder" should get you pure ANSI C, and basically run anywhere. Still, even this version is 1.5x to 2x slower than CPython. You should also mention that some work I did recently adds support for the shadowstack gc root finder to the JIT, so that even on ARM having the JIT with a good GC is not far away (even if it's indeed not done yet). > At this point in time we have no official minimum requirements to run > pypy on ARM, but (quoting Armin) probably anything starting from 64MB > should be fine, depending on the application. I'm rather confident than 64MB is enough, but I also guess that 32MB has a chance to work. With only 16MB I bet nothing much works right now (given that e.g. our GC, by default, does not collect before the heap grew to at least 16MB). It may be possible for anyone with enough motivation to improve the situation on 16MB. A bient?t, Armin. From dimaqq at gmail.com Tue Apr 5 21:26:26 2011 From: dimaqq at gmail.com (Dima Tisnek) Date: Tue, 5 Apr 2011 12:26:26 -0700 Subject: [pypy-dev] minimum system requirements, build configuration In-Reply-To: References: <3B1535B65AE041F0B53FF5F13BF8178D@gmail.com> Message-ID: On 5 April 2011 08:54, Armin Rigo wrote: [snip] > I'm rather confident than 64MB is enough, but I also guess that 32MB > has a chance to work. ?With only 16MB I bet nothing much works right > now (given that e.g. our GC, by default, does not collect before the > heap grew to at least 16MB). ?It may be possible for anyone with > enough motivation to improve the situation on 16MB. Ouch! That's something to do. I'm currently running cpython 2.6 with a bunch of extension in pre-production on a system that has 32MB in total; and another case, a slim long-running process that runs on a big server, cpython footprint only several MB. In the first case, your treshold would create high memory pressure (bad for os caching things), in the 2nd a saw-like memory use and perhaps significant latency spikes. What I'm trying to say, is gc should adapt to run-time behaviour of a particular script, in some cases 16MB heap threshold would impact both user expectation and performance significantly. Put it up on TODO list! d. From dimaqq at gmail.com Tue Apr 5 21:30:39 2011 From: dimaqq at gmail.com (Dima Tisnek) Date: Tue, 5 Apr 2011 12:30:39 -0700 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Message-ID: It seems most pypy benchmarks are moved from cpython and as a result they fail because a particular module is unavalable. A more pragmatic approach is to go through the list of benchmarks and either accept current python code for pypy or fail on purpose if python code is too un-pypy-like. Later on, hopefully, someone volunteers to rewrite problem benchmarks. This, btw, is almost status quo. how about that? both visibility for pypy and avoidance of worst problems... d. 
On 5 April 2011 08:14, Armin Rigo wrote: > Hi Isaac, > > If you're not interested in this discussion and just want a Yes or No > answer to precisely the following question: > > "should PyPy be added to the benchmarks game's site, with no changes > whatsoever to any of the Python benchmarks that are there so far?" > > then the answer is No. ?We are not interested in the results of PyPy > on such skewed benchmarks, and publishing them would only add to the > general confusion. > > > Armin > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From arigo at tunes.org Tue Apr 5 21:44:37 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 5 Apr 2011 21:44:37 +0200 Subject: [pypy-dev] minimum system requirements, build configuration In-Reply-To: References: <3B1535B65AE041F0B53FF5F13BF8178D@gmail.com> Message-ID: Hi Dima, On Tue, Apr 5, 2011 at 9:26 PM, Dima Tisnek wrote: > What I'm trying to say, is gc should adapt to run-time behaviour of a > particular script, in some cases 16MB heap threshold would impact both > user expectation and performance significantly. It may impact user expectation and performance, but: User expectation: so far we've considered the "desktop" users, which do not care about 10MB versus 20MB but start to care when it's about 300MB versus 600MB. It's true that other categories of users exist. Sorry for not being able to care for all possible use cases at once :-) Performance: actually this setting of ours -- not collecting before we have at least 16MB of data -- was done to avoid degradation of performance in cases where the Python script has really low memory usage, so the idea "it's consuming 30MB instead of 5MB so it must have a terrible performance" sounds pretty abstract. But again this is assuming a system where 30MB is a small fraction of the total amount of RAM. If you want to care about the use case of, say, systems with 16MB of RAM in total, then feel free to tweak PyPy. I suppose that you'd get the best results by tweaking one of our GCs, or writing a new one. (For example it probably doesn't make much sense to have a nursery at all.) I hope this helps to clarify the issue, Armin From dimaqq at gmail.com Tue Apr 5 21:52:24 2011 From: dimaqq at gmail.com (Dima Tisnek) Date: Tue, 5 Apr 2011 12:52:24 -0700 Subject: [pypy-dev] minimum system requirements, build configuration In-Reply-To: References: <3B1535B65AE041F0B53FF5F13BF8178D@gmail.com> Message-ID: Oh, I guessed your reasons. Lanugages like python make a lot of garbage, so 16MB will fill up pretty fast as long as the program does something at all. What I mean to say is that there's gotta be a more clever way where gc thresholds depend on e.g. size of working set or rate of new allocations or something yet smarter. d. On 5 April 2011 12:44, Armin Rigo wrote: > Hi Dima, > > On Tue, Apr 5, 2011 at 9:26 PM, Dima Tisnek wrote: >> What I'm trying to say, is gc should adapt to run-time behaviour of a >> particular script, in some cases 16MB heap threshold would impact both >> user expectation and performance significantly. > > It may impact user expectation and performance, but: > > User expectation: so far we've considered the "desktop" users, which > do not care about 10MB versus 20MB but start to care when it's about > 300MB versus 600MB. ?It's true that other categories of users exist. 
> Sorry for not being able to care for all possible use cases at once > :-) > > Performance: actually this setting of ours -- not collecting before we > have at least 16MB of data -- was done to avoid degradation of > performance in cases where the Python script has really low memory > usage, so the idea "it's consuming 30MB instead of 5MB so it must have > a terrible performance" sounds pretty abstract. ?But again this is > assuming a system where 30MB is a small fraction of the total amount > of RAM. > > If you want to care about the use case of, say, systems with 16MB of > RAM in total, then feel free to tweak PyPy. ?I suppose that you'd get > the best results by tweaking one of our GCs, or writing a new one. > (For example it probably doesn't make much sense to have a nursery at > all.) > > > I hope this helps to clarify the issue, > > Armin > From arigo at tunes.org Tue Apr 5 21:53:18 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 5 Apr 2011 21:53:18 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Message-ID: Hi Dima, On Tue, Apr 5, 2011 at 9:30 PM, Dima Tisnek wrote: > A more pragmatic approach is to go through the list of benchmarks and > either accept current python code for pypy or fail on purpose if > python code is too un-pypy-like. > > Later on, hopefully, someone volunteers to rewrite problem benchmarks. Several of us already went down that road. However, as I said, the unresolved issue right now is if the maintainer (Isaac?) is willing to accept or not any new, simpler, more portable version of the benchmarks programs. If not, then that's the end of this discussion, I fear. Armin From arigo at tunes.org Tue Apr 5 22:00:23 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 5 Apr 2011 22:00:23 +0200 Subject: [pypy-dev] minimum system requirements, build configuration In-Reply-To: References: <3B1535B65AE041F0B53FF5F13BF8178D@gmail.com> Message-ID: Hi Dima, On Tue, Apr 5, 2011 at 9:52 PM, Dima Tisnek wrote: > What I mean > to say is that there's gotta be a more clever way where gc thresholds > depend on e.g. size of working set or rate of new allocations or > something yet smarter. Yes, it does in PyPy; we do a full collection when the total amount of data reaches 1.82 times (by default) the amount of live data at the end of the previous collection, with additional tweaks to improve performance in various cases -- and one such tweak is to set the minimum threshold to 16MB (actually, now I think it is not fixed to 16MB but it depends on the amount of L2 cache). It was all reached by various benchmarks on various desktop machines, including the minimum threshold. You can see the various thresholds and their defaults at the start of pypy/rpython/memory/gc/minimark.py (and that's only if you are using minimark, our default GC). Of course all the numbers -- and even half of the algorithms -- are going to be bogus if you start to think about very different machines. That's what I meant when I said that there is open work to do, and you or anyone with an interest in the area (and corresponding hardware) is welcome to attack it. A bient?t, Armin From igouy2 at yahoo.com Tue Apr 5 22:38:48 2011 From: igouy2 at yahoo.com (Isaac Gouy) Date: Tue, 5 Apr 2011 13:38:48 -0700 (PDT) Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? 
In-Reply-To: Message-ID: <151772.71454.qm@web65605.mail.ac4.yahoo.com> --- On Tue, 4/5/11, Armin Rigo wrote: > what I know as the language shootout (shootout.alioth.debian.org).? > Is it the case? The project was renamed back on 20th April 2007 http://c2.com/cgi/wiki?GreatComputerLanguageShootout > Assuming it is, then my position is pretty much the same as > William Leslie: I have little interest if the benchmarks that > PyPy runs with are written in completely non-idiomatic ways, > super hand-optimized for CPython, for the reasons explained in > detail by William.? It would be interesting, however, if you > accept versions rewritten in "just plain Python". I have no objection to programs written in "just plain Python" - just as I have no objection to programs written "in completely non-idiomatic ways, super hand-optimized for CPython". However, unless there was something interesting about them - such as PyPy made them fast - they might be weeded out in the future. In most cases I do have an objection to numpy and to calling C using ctypes - programs have to be more "plain Python" than that. From dimaqq at gmail.com Tue Apr 5 22:51:42 2011 From: dimaqq at gmail.com (Dima Tisnek) Date: Tue, 5 Apr 2011 13:51:42 -0700 Subject: [pypy-dev] minimum system requirements, build configuration In-Reply-To: References: <3B1535B65AE041F0B53FF5F13BF8178D@gmail.com> Message-ID: Hi Armin, thanks for pointing me in the right direction. If minimark.py docstring is up to date, looks like it will do close to the right thing on arm5/9. Some of the older arms don't have L2 cache, I'm not sure what /proc/cpuinfo reports in this case, perhaps L1 cache size or nothing. It seems if the cache line is missing in cpuinfo, get_L2cache_linux2 would return maxint (ouch memory) or, if kernel prints 0 there, then 0 (ouch cpu), or if -1 gets in the way, default nursery size, which is 256 bytes (ouch cpu again). If anyone tries to run pypy under QEMU in system mode, results might be very very odd! Modern arm processors have L2 cache from 256KB to 1MB, thus the expected minimum threshold is 1MB to 4MB, seems reasonable enough, as importing a decent set of python standard library is around that. I'm sure Liam is pleased :P d. On 5 April 2011 13:00, Armin Rigo wrote: > Hi Dima, > > On Tue, Apr 5, 2011 at 9:52 PM, Dima Tisnek wrote: >> What I mean >> to say is that there's gotta be a more clever way where gc thresholds >> depend on e.g. size of working set or rate of new allocations or >> something yet smarter. > > Yes, it does in PyPy; we do a full collection when the total amount of > data reaches 1.82 times (by default) the amount of live data at the > end of the previous collection, with additional tweaks to improve > performance in various cases -- and one such tweak is to set the > minimum threshold to 16MB (actually, now I think it is not fixed to > 16MB but it depends on the amount of L2 cache). ?It was all reached by > various benchmarks on various desktop machines, including the minimum > threshold. ?You can see the various thresholds and their defaults at > the start of pypy/rpython/memory/gc/minimark.py (and that's only if > you are using minimark, our default GC). > > Of course all the numbers -- and even half of the algorithms -- are > going to be bogus if you start to think about very different machines. > ?That's what I meant when I said that there is open work to do, and > you or anyone with an interest in the area (and corresponding > hardware) is welcome to attack it. 
From dimaqq at gmail.com  Tue Apr  5 22:54:29 2011
From: dimaqq at gmail.com (Dima Tisnek)
Date: Tue, 5 Apr 2011 13:54:29 -0700
Subject: [pypy-dev] PyPy in the benchmarks game - yes or no?
In-Reply-To: <151772.71454.qm@web65605.mail.ac4.yahoo.com>
References: <151772.71454.qm@web65605.mail.ac4.yahoo.com>
Message-ID: 

On 5 April 2011 13:38, Isaac Gouy wrote:
>
> --- On Tue, 4/5/11, Armin Rigo wrote:
>
>> what I know as the language shootout (shootout.alioth.debian.org).
>> Is it the case?
>
> The project was renamed back on 20th April 2007
>
> http://c2.com/cgi/wiki?GreatComputerLanguageShootout
>
>> Assuming it is, then my position is pretty much the same as
>> William Leslie: I have little interest if the benchmarks that
>> PyPy runs with are written in completely non-idiomatic ways,
>> super hand-optimized for CPython, for the reasons explained in
>> detail by William.  It would be interesting, however, if you
>> accept versions rewritten in "just plain Python".
>
> I have no objection to programs written in "just plain Python" - just as
> I have no objection to programs written "in completely non-idiomatic
> ways, super hand-optimized for CPython".
>
> However, unless there was something interesting about them - such as
> PyPy made them fast - they might be weeded out in the future.
>
> In most cases I do have an objection to numpy and to calling C using
> ctypes - programs have to be more "plain Python" than that.

Perhaps there should be separate categories for "python" and "numpy" for
benchmarks where it makes sense?

From lstask at gmail.com  Tue Apr  5 23:09:55 2011
From: lstask at gmail.com (Liam Staskawicz)
Date: Tue, 5 Apr 2011 14:09:55 -0700
Subject: [pypy-dev] minimum system requirements, build configuration
In-Reply-To: 
References: <3B1535B65AE041F0B53FF5F13BF8178D@gmail.com>
Message-ID: 

Indeed - great to see some of the thought going into this area of the
project :)  If I make any progress on the actual HW I'll be sure to let
you know.

Liam

On Tue, Apr 5, 2011 at 1:51 PM, Dima Tisnek wrote:
> Hi Armin, thanks for pointing me in the right direction.
>
> If minimark.py docstring is up to date, looks like it will do close to
> the right thing on arm5/9.
>
> Some of the older arms don't have L2 cache, I'm not sure what
> /proc/cpuinfo reports in this case, perhaps L1 cache size or nothing.
> It seems if the cache line is missing in cpuinfo, get_L2cache_linux2
> would return maxint (ouch memory) or, if kernel prints 0 there, then 0
> (ouch cpu), or if -1 gets in the way, default nursery size, which is
> 256 bytes (ouch cpu again). If anyone tries to run pypy under QEMU in
> system mode, results might be very very odd!
>
> Modern arm processors have L2 cache from 256KB to 1MB, thus the
> expected minimum threshold is 1MB to 4MB, seems reasonable enough, as
> importing a decent set of python standard library is around that.
>
> I'm sure Liam is pleased :P
>
> d.
>
> On 5 April 2011 13:00, Armin Rigo wrote:
> > Hi Dima,
> >
> > On Tue, Apr 5, 2011 at 9:52 PM, Dima Tisnek wrote:
> >> What I mean
> >> to say is that there's gotta be a more clever way where gc thresholds
> >> depend on e.g. size of working set or rate of new allocations or
> >> something yet smarter.
> > > > Yes, it does in PyPy; we do a full collection when the total amount of > > data reaches 1.82 times (by default) the amount of live data at the > > end of the previous collection, with additional tweaks to improve > > performance in various cases -- and one such tweak is to set the > > minimum threshold to 16MB (actually, now I think it is not fixed to > > 16MB but it depends on the amount of L2 cache). It was all reached by > > various benchmarks on various desktop machines, including the minimum > > threshold. You can see the various thresholds and their defaults at > > the start of pypy/rpython/memory/gc/minimark.py (and that's only if > > you are using minimark, our default GC). > > > > Of course all the numbers -- and even half of the algorithms -- are > > going to be bogus if you start to think about very different machines. > > That's what I meant when I said that there is open work to do, and > > you or anyone with an interest in the area (and corresponding > > hardware) is welcome to attack it. > > > > > > A bient?t, > > > > Armin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qbproger at gmail.com Tue Apr 5 23:49:15 2011 From: qbproger at gmail.com (Joe) Date: Tue, 5 Apr 2011 17:49:15 -0400 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <602617.13125.qm@web65613.mail.ac4.yahoo.com> References: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Message-ID: While I spent my Saturday trying to make PyPy look better in the language shootout, I'm leaning towards taking it out. While a comparison between languages may be interesting, maybe having 1 implementation per language in the shootout would work better. Then there is one target to optimize for. PyPy and CPython have very different performance characteristics. I feel as though speed.python.org may be a better venue for comparing python implementations. Since the pypy developers have closer ties to the Python Core developers and it's been stated they'll have influence. It can be made to be fair for all parties involved. Since all parties will likely be python implementations they can all agree on one implementation and use that. Joe On Mon, Apr 4, 2011 at 8:23 PM, Isaac Gouy wrote: > Hi > > A simple yes / no question. > > Do you want PyPy to be shown in the benchmarks game or not? > > Please consider the question amongst yourselves and then let me know. > > Of course, I'll make up my own mind but at least I'll be able to take your wishes into account. > > best wishes, Isaac > > > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From barren.yak at gmail.com Tue Apr 5 23:27:36 2011 From: barren.yak at gmail.com (Ryan Baker) Date: Tue, 5 Apr 2011 14:27:36 -0700 Subject: [pypy-dev] A Short Introduction Message-ID: Greetings PyPy-dev! I'm new here on the mailing list (and to PyPy) and thought I would give a short introduction. My name is Ryan and I'm a student at Oregon State University in Computer Science. I'm currently interning at Intel until September. A little technical background: I love Python (surprise!), and have been programming in various languages since I was about 12. I have a large C and Java background while dabbling in others such as LISP, Ruby, Obj-C, and Lua. Most of my coding has been small personal projects or academic work and decided to finally get involved in a community. 
I was drawn to the PyPy project after I played with if over a weekend and found it to be extremely relevant to my personal interests. I look forward to meeting all of you both on the mailing list and on IRC and I hope that I can put some of my time to good use and help PyPy! On a side note, I usually am very quick to respond to emails so if you ever have a question, idea, or anything you want to bounce off of me, please feel free to shoot me a message. Best Regards, Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From barren.yak at gmail.com Wed Apr 6 00:27:55 2011 From: barren.yak at gmail.com (Ryan Baker) Date: Tue, 5 Apr 2011 15:27:55 -0700 Subject: [pypy-dev] =?utf-8?q?A_Short_Introducti=E2=80=8Bon?= Message-ID: I apologize for the additional message, but it seems I forgot to put mention IRC nick (thanks Laura!). I'll be using the nick barren_yak. See you all there. Best Regards, Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron.devore at gmail.com Wed Apr 6 01:22:21 2011 From: aaron.devore at gmail.com (Aaron DeVore) Date: Tue, 5 Apr 2011 16:22:21 -0700 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Message-ID: On Tue, Apr 5, 2011 at 2:49 PM, Joe wrote: > While I spent my Saturday trying to make PyPy look better in the > language shootout, I'm leaning towards taking it out. ?While a > comparison between languages may be interesting, maybe having 1 > implementation per language in the shootout would work better. ?Then > there is one target to optimize for. ?PyPy and CPython have very > different performance characteristics. > > I feel as though speed.python.org may be a better venue for comparing > python implementations. ?Since the pypy developers have closer ties to > the Python Core developers and it's been stated they'll have > influence. ?It can be made to be fair for all parties involved. ?Since > all parties will likely be python implementations they can all agree > on one implementation and use that. > > Joe I heavily recommend keeping PyPy in the Shootout in some form. Even with the Shootout's flaws, it is nice to have a general idea of how PyPy compares to both CPython and other languages. -Aaron DeVore From fijall at gmail.com Wed Apr 6 12:14:22 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 6 Apr 2011 12:14:22 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <602617.13125.qm@web65613.mail.ac4.yahoo.com> Message-ID: On Wed, Apr 6, 2011 at 1:22 AM, Aaron DeVore wrote: > On Tue, Apr 5, 2011 at 2:49 PM, Joe wrote: >> While I spent my Saturday trying to make PyPy look better in the >> language shootout, I'm leaning towards taking it out. ?While a >> comparison between languages may be interesting, maybe having 1 >> implementation per language in the shootout would work better. ?Then >> there is one target to optimize for. ?PyPy and CPython have very >> different performance characteristics. >> >> I feel as though speed.python.org may be a better venue for comparing >> python implementations. ?Since the pypy developers have closer ties to >> the Python Core developers and it's been stated they'll have >> influence. ?It can be made to be fair for all parties involved. ?Since >> all parties will likely be python implementations they can all agree >> on one implementation and use that. 
>> >> Joe > > I heavily recommend keeping PyPy in the Shootout in stome form. Even > with the Shootout's flaws, it is nice to have a general idea of how > PyPy compares to both CPython and other languages. We don't get that information now at least, since those benchmarks are badly skewed towards CPython. I know how hard is to find out a reasonable set of benchmarks and how to keep them balanced. I have another issue with ctypes & numpy. This is that C implementations are allowed to use gcc-specific hacks and non-standard libraries (apache malloc for gcbench). Why wouldn't we be allowed to do the same then? I would like to have PyPy included, but I would also like the benchmark game to be "fair" or as close to "fair" as we can get. Having benchmarks with different rules for different languages doesn't seem to be quite fair in my opinion. Note that this is up to discussions - I'm fine with saying "numpy & ctypes are disallowed" or having a separate category "python + numpy" with both CPython and PyPy (once we get numpy). > > -Aaron DeVore > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From igouy2 at yahoo.com Wed Apr 6 15:08:10 2011 From: igouy2 at yahoo.com (Isaac Gouy) Date: Wed, 6 Apr 2011 06:08:10 -0700 (PDT) Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: Message-ID: <564429.74157.qm@web65602.mail.ac4.yahoo.com> --- On Wed, 4/6/11, Maciej Fijalkowski wrote: -snip- > We don't get that information now at least, since those > benchmarks are badly skewed towards CPython. I know how > hard is to find out a reasonable set of benchmarks and how to > keep them balanced. Do you mean the program you contributed is badly skewed towards CPython? http://shootout.alioth.debian.org/u32/program.php?test=nbody&lang=pypy&id=1 Do you mean that the n-body problem is badly skewed towards CPython? Your PyPy program is shown as so much faster - how is that "badly skewed towards CPython"? From cfbolz at gmx.de Wed Apr 6 15:13:13 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Wed, 06 Apr 2011 15:13:13 +0200 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> <4D9B0FE2.9020003@gmx.de> Message-ID: <4D9C66E9.8020003@gmx.de> On 04/05/2011 03:54 PM, Armin Rigo wrote: > Hi, > > On Tue, Apr 5, 2011 at 3:40 PM, Andrew Brown wrote: >> I corrected a typo and changed the wording in 2 places. They're not any huge >> deals, but if you want to edit the post, see my changes here: >> https://bitbucket.org/brownan/pypy-tutorial/changeset/8cfb3cd72515 > > Thanks, applied. I also removed your explicit e-mail address, and > replaced it with a link to one of your previous posts on this mailing > list, from where people can still get your e-mail if they want --- but > at least it's partially filtered against spammers. Second post is up too: http://bit.ly/fLjGHs Thanks again, Andrew! FWIW, the first post is already on place four of all PyPy blog posts in the ranking of page impressions. Carl Friedrich From igouy2 at yahoo.com Wed Apr 6 15:15:36 2011 From: igouy2 at yahoo.com (Isaac Gouy) Date: Wed, 6 Apr 2011 06:15:36 -0700 (PDT) Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? 
In-Reply-To: Message-ID: <405881.94798.qm@web65614.mail.ac4.yahoo.com> --- On Wed, 4/6/11, Maciej Fijalkowski wrote: -snip- > I have another issue with ctypes & numpy. This is that > C implementations are allowed to use gcc-specific hacks and > non-standard libraries (apache malloc for gcbench). Why wouldn't > we be allowed to do the same then? Would you describe C as "batteries included" ? From fijall at gmail.com Wed Apr 6 15:31:12 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 6 Apr 2011 15:31:12 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <405881.94798.qm@web65614.mail.ac4.yahoo.com> References: <405881.94798.qm@web65614.mail.ac4.yahoo.com> Message-ID: On Wed, Apr 6, 2011 at 3:15 PM, Isaac Gouy wrote: > > > --- On Wed, 4/6/11, Maciej Fijalkowski wrote: > > -snip- >> I have another issue with ctypes & numpy. This is that >> C implementations are allowed to use gcc-specific hacks and >> non-standard libraries (apache malloc for gcbench). Why wouldn't >> we be allowed to do the same then? > > Would you describe C as "batteries included" ? > nope, I would not. However, ctypes come included in standard python distribution (unlike numpy or gmpy) so this is not really a valid argument. > > > > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From fijall at gmail.com Wed Apr 6 15:34:56 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 6 Apr 2011 15:34:56 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <564429.74157.qm@web65602.mail.ac4.yahoo.com> References: <564429.74157.qm@web65602.mail.ac4.yahoo.com> Message-ID: On Wed, Apr 6, 2011 at 3:08 PM, Isaac Gouy wrote: > > > --- On Wed, 4/6/11, Maciej Fijalkowski wrote: > > -snip- >> We don't get that information now at least, since those >> benchmarks are badly skewed towards CPython. I know how >> hard is to find out a reasonable set of benchmarks and how to >> keep them balanced. > > > Do you mean the program you contributed is badly skewed towards CPython? > > http://shootout.alioth.debian.org/u32/program.php?test=nbody&lang=pypy&id=1 > No > > Do you mean that the n-body problem is badly skewed towards CPython? No, that would be nonsense. I would never discuss whether those benchmarks does represent typical workflow in language X because it's impossible to find such a set that's true for every X. I never did discuss the choice of problems. > > Your PyPy program is shown as so much faster - how is that "badly skewed towards CPython"? > That's true, but that's one that got through. For example reverse complement (the current version) is skewed towards CPython. I'm fine with saying that ctypes (or numpy) are not allowed, with a good explanation (and maybe an explanation why custom malloc library is allowed for C and gcbench). Another question which was raised - are programs that only work on PyPy allowed? (Due to pypy's extensions or cpython bugs). Since programs that only compile on GCC clearly are. 
Cheers, fijal From brownan at gmail.com Wed Apr 6 16:03:44 2011 From: brownan at gmail.com (Andrew Brown) Date: Wed, 6 Apr 2011 10:03:44 -0400 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: <4D9C66E9.8020003@gmx.de> References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> <4D9B0FE2.9020003@gmx.de> <4D9C66E9.8020003@gmx.de> Message-ID: Hmm, looks like the line numbers for the JIT trace output are mis-aligned, although it may just be my browser (Chrome beta). Looks fine in Firefox. Oh well. But anyways... On Wed, Apr 6, 2011 at 9:13 AM, Carl Friedrich Bolz wrote: > Thanks again, Andrew! FWIW, the first post is already on place four of > all PyPy blog posts in the ranking of page impressions. > You're welcome! That's awesome to hear, I'm glad I could contribute. Also, Dan, if you wanted to post your version I'm curious to see your approach. -Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From cfbolz at gmx.de Wed Apr 6 16:05:02 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Wed, 06 Apr 2011 16:05:02 +0200 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> <4D9B0FE2.9020003@gmx.de> <4D9C66E9.8020003@gmx.de> Message-ID: <4D9C730E.2070904@gmx.de> On 04/06/2011 04:03 PM, Andrew Brown wrote: > Hmm, looks like the line numbers for the JIT trace output are > mis-aligned, although it may just be my browser (Chrome beta). Looks > fine in Firefox. Oh well. Can you re-load? I tried to fix it. Carl Friedrich From brownan at gmail.com Wed Apr 6 16:12:13 2011 From: brownan at gmail.com (Andrew Brown) Date: Wed, 6 Apr 2011 10:12:13 -0400 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: <4D9C730E.2070904@gmx.de> References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> <4D9B0FE2.9020003@gmx.de> <4D9C66E9.8020003@gmx.de> <4D9C730E.2070904@gmx.de> Message-ID: No good, it still looks like this: http://i.imgur.com/nuLIf.png Chrome 12.0.712.0 dev on Ubuntu. -A On Wed, Apr 6, 2011 at 10:05 AM, Carl Friedrich Bolz wrote: > On 04/06/2011 04:03 PM, Andrew Brown wrote: > >> Hmm, looks like the line numbers for the JIT trace output are >> mis-aligned, although it may just be my browser (Chrome beta). Looks >> fine in Firefox. Oh well. >> > > Can you re-load? I tried to fix it. > > Carl Friedrich > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dimaqq at gmail.com Wed Apr 6 16:55:36 2011 From: dimaqq at gmail.com (Dima Tisnek) Date: Wed, 6 Apr 2011 07:55:36 -0700 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> <4D9B0FE2.9020003@gmx.de> <4D9C66E9.8020003@gmx.de> <4D9C730E.2070904@gmx.de> Message-ID: line numbers are totally broken in opera - looks like double-digit number are split and spilt to the next line safari looks better, but still as if line numbers are offset half a line, so as if numbers point between the source code lines. 
I used SyntaxHighlighter in blogs before, that works, highlights well, gives you line numbers and doesn't interfere with selection. It's presumably tested with all the browsers out there and it's a simple drop-in. d. On 6 April 2011 07:12, Andrew Brown wrote: > No good, it still looks like this:?http://i.imgur.com/nuLIf.png > Chrome?12.0.712.0 dev on Ubuntu. > -A > > On Wed, Apr 6, 2011 at 10:05 AM, Carl Friedrich Bolz wrote: >> >> On 04/06/2011 04:03 PM, Andrew Brown wrote: >>> >>> Hmm, looks like the line numbers for the JIT trace output are >>> mis-aligned, although it may just be my browser (Chrome beta). Looks >>> fine in Firefox. Oh well. >> >> Can you re-load? I tried to fix it. >> >> Carl Friedrich > > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From igouy2 at yahoo.com Wed Apr 6 17:18:03 2011 From: igouy2 at yahoo.com (Isaac Gouy) Date: Wed, 6 Apr 2011 08:18:03 -0700 (PDT) Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: Message-ID: <646173.36708.qm@web65603.mail.ac4.yahoo.com> --- On Wed, 4/6/11, Maciej Fijalkowski wrote: -snip- > > Do you mean the program you contributed is badly > skewed towards CPython? > > > > http://shootout.alioth.debian.org/u32/program.php?test=nbody?=pypy&id=1 > > > > No > > > > > Do you mean that the n-body problem is badly skewed > towards CPython? > > No, that would be nonsense. I would never discuss whether > those > benchmarks does represent typical workflow in language X > because it's > impossible to find such a set that's true for every X. I > never did > discuss the choice of problems. > > > > > Your PyPy program is shown as so much faster - how is > that "badly skewed towards CPython"? > > > > That's true, but that's one that got through. I don't see how the program you contributed could be described as "one that got through". Here's what happened - I noticed the n-body program failed with PyPy, I asked you guys about the problem and was told "we have nbody_modified in our benchmarks" and then I asked you guys to contribute your modified program. http://morepypy.blogspot.com/2010/03/introducing-pypy-12-release.html Once you contributed the program modified for PyPy it was displayed on the website within 3 hours. > For example reverse complement (the current version) is > skewed towards CPython. If only someone could manage to write a reverse complement skewed towards PyPy without using libc.write ;-) > I'm fine with saying that ctypes (or numpy) are not allowed, > with a good explanation (and maybe an explanation why custom malloc > library is allowed for C and gcbench). > > Another question which was raised - are programs that only > work on PyPy allowed? (Due to pypy's extensions or cpython bugs). PyPy extensions - No. CPython bugs - How strange that the CPython bug was never mentioned! - maybe. > Since programs that only compile on GCC clearly are. How many C language implementations are shown? How many Python language implementations are shown? If only one Python language implementation was shown do you think it would be PyPy ? From fijall at gmail.com Wed Apr 6 17:32:08 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 6 Apr 2011 17:32:08 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? 
In-Reply-To: <646173.36708.qm@web65603.mail.ac4.yahoo.com> References: <646173.36708.qm@web65603.mail.ac4.yahoo.com> Message-ID: On Wed, Apr 6, 2011 at 5:18 PM, Isaac Gouy wrote: > > > --- On Wed, 4/6/11, Maciej Fijalkowski wrote: > > -snip- >> > Do you mean the program you contributed is badly >> skewed towards CPython? >> > >> > http://shootout.alioth.debian.org/u32/program.php?test=nbody?=pypy&id=1 >> > >> >> No >> >> > >> > Do you mean that the n-body problem is badly skewed >> towards CPython? >> >> No, that would be nonsense. I would never discuss whether >> those >> benchmarks does represent typical workflow in language X >> because it's >> impossible to find such a set that's true for every X. I >> never did >> discuss the choice of problems. >> >> > >> > Your PyPy program is shown as so much faster - how is >> that "badly skewed towards CPython"? >> > >> >> That's true, but that's one that got through. > > > I don't see how the program you contributed could be described as "one that got through". > > Here's what happened - I noticed the n-body program failed with PyPy, I asked you guys about the problem and was told "we have nbody_modified in our benchmarks" and then I asked you guys to contribute your modified program. > > http://morepypy.blogspot.com/2010/03/introducing-pypy-12-release.html > > Once you contributed the program modified for PyPy it was displayed on the website within 3 hours. > Cool, thank you. > > >> For example reverse complement (the current version) is >> skewed towards CPython. > > If only someone could manage to write a reverse complement skewed towards PyPy without using libc.write ;-) > Deal, we'll do that. > > >> I'm fine with saying that ctypes (or numpy) are not allowed, >> with a good explanation (and maybe an explanation why custom malloc >> library is allowed for C and gcbench). >> >> Another question which was raised - are programs that only >> work on PyPy allowed? (Due to pypy's extensions or cpython bugs). > > PyPy extensions - No. > > CPython bugs - How strange that the CPython bug was never mentioned! - maybe. > Ok. The bug was not mentioned because it takes time to decide "it's a bug". > >> Since programs that only compile on GCC clearly are. > > How many C language implementations are shown? > > How many Python language implementations are shown? > > If only one Python language implementation was shown do you think it would be PyPy ? > I can't really read your mind, but from my opinion if the question is "how fast Python programs can be run" then the answer is Yes. So the position is that GCC is allowed to use extensions because it's the only C implementation shown and PyPy is not, because all Python programs should run on each runtime, is that correct? I'm not stating an opinion about it, I just want to know what the rules are. > > > > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev From dasdasich at googlemail.com Wed Apr 6 18:52:24 2011 From: dasdasich at googlemail.com (DasIch) Date: Wed, 6 Apr 2011 18:52:24 +0200 Subject: [pypy-dev] [GSoC] Developing a benchmark suite (for Python 3.x) Message-ID: Hello Guys, I would like to present my proposal for the Google Summer of Code, concerning the idea of porting the benchmarks to Python 3.x for speed.pypy.org. I think I have successfully integrated the feedback I got from prior discussions on the topic and I would like to hear your opinion. 
Abstract ======= As of now there are several benchmark suites used by Python implementations, PyPy[1] uses the benchmarks developed for the Unladen Swallow[2] project as well as several other benchmarks they implemented on their own, CPython[3] uses the Unladen Swallow benchmarks and several "crap benchmarks used for historical reasons"[4]. This makes comparisons unnecessarily hard and causes confusion. As a solution to this problem I propose merging the existing benchmarks - at least those considered worth having - into a single benchmark suite which can be shared by all implementations and ported to Python 3.x. Milestones The project can be divided into several milestones: 1. Definition of the benchmark suite. This will entail contacting developers of Python implementations (CPython, PyPy, IronPython and Jython), via discussion on the appropriate mailing lists. This might be achievable as part of this proposal. 2. Implementing the benchmark suite. Based on the prior agreed upon definition, the suite will be implemented, which means that the benchmarks will be merged into a single mercurial repository on Bitbucket[5]. 3. Porting the suite to Python 3.x. The suite will be ported to 3.x using 2to3[6], as far as possible. The usage of 2to3 will make it easier make changes to the repository especially for those still focusing on 2.x. It is to be expected that some benchmarks cannot be ported due to dependencies which are not available on Python 3.x. Those will be ignored by this project to be ported at a later time, when the necessary requirements are met. Start of Program (May 24) ====================== Before the coding, milestones 2 and 3, can begin it is necessary to agree upon a set of benchmarks, everyone is happy with, as described. Midterm Evaluation (July 12) ======================= During the midterm I want to finish the second milestone and before the evaluation I want to start in the third milestone. Final Evaluation (Aug 16) ===================== In this period the benchmark suite will be ported. If everything works out perfectly I will even have some time left, if there are problems I have a buffer here. Probably Asked Questions ====================== Why not use one of the existing benchmark suites for porting? The effort will be wasted if there is no good base to build upon, creating a new benchmark suite based upon the existing ones ensures that. Why not use Git/Bazaar/...? Mercurial is used by CPython, PyPy and is fairly well known and used in the Python community. This ensures easy accessibility for everyone. What will happen with the Repository after GSoC/How will access to the repository be handled? I propose to give administrative rights to one or two representatives of each project. Those will provide other developers with write access. Communication ============= Communication of the progress will be done via Twitter[7] and my blog[8], if desired I can also send an email with the contents of the blog post to the mailing lists of the implementations. Furthermore I am usually quick to answer via IRC (DasIch on freenode), Twitter or E-Mail(dasdasich at gmail.com) if anyone has any questions. Contact to the mentor can be established via the means mentioned above or via Skype. About Me ======== My name is Daniel Neuh?user, I am 19 years old and currently a student at the Bergstadt-Gymnasium L?denscheid[9]. 
I started programming (with Python) about 4 years ago and became a member of the Pocoo Team[10] after successfully participating in the Google Summer of Code last year, during which I ported Sphinx[11] to Python 3.x and implemented an algorithm to diff abstract syntax trees to preserve comments and translated strings which has been used by the other GSoC projects targeting Sphinx. .. [1]: https://bitbucket.org/pypy/benchmarks/src .. [2]: http://code.google.com/p/unladen-swallow/ .. [3]: http://hg.python.org/benchmarks/file/tip/performance .. [4]: http://hg.python.org/benchmarks/file/62e754c57a7f/performance/README .. [5]: http://bitbucket.org/ .. [6]: http://docs.python.org/library/2to3.html .. [7]: http://twitter.com/#!/DasIch .. [8]: http://dasdasich.blogspot.com/ .. [9]: http://bergstadt-gymnasium.de/ .. [10]: http://www.pocoo.org/team/#daniel-neuhauser .. [11]: http://sphinx.pocoo.org/ P.S.: I would like to get in touch with the IronPython developers as well, unfortunately I was not able to find a mailing list or IRC channel is there anybody how can send me in the right direction? From igouy2 at yahoo.com Wed Apr 6 19:33:20 2011 From: igouy2 at yahoo.com (Isaac Gouy) Date: Wed, 6 Apr 2011 10:33:20 -0700 (PDT) Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: Message-ID: <795302.82949.qm@web65602.mail.ac4.yahoo.com> --- On Wed, 4/6/11, Maciej Fijalkowski wrote: -snip- > > CPython bugs - How strange that the CPython bug was > never mentioned! - maybe. > > > > Ok. The bug was not mentioned because it takes time to > decide "it's a bug". I know someone decided "it's a bug" because someone said so in a blog post they pushed out across the blogosphere and proggit and Hacker News and ... How strange that CPython bug was never mentioned to me! > Since programs that only compile on GCC clearly > are. > > > > How many C language implementations are shown? > > > > How many Python language implementations are shown? > > > > If only one Python language implementation was shown > do you think it would be PyPy ? > > > > I can't really read your mind, but from my opinion if the > question is "how fast Python programs can be run" then the > answer is Yes. The Help page says something about showing working programs written in less familiar programming languages. > So the position is that GCC is allowed to use extensions > because it's the only C implementation shown and PyPy is not, > because all Python programs should run on each runtime, is that > correct? I don't see a way to compare CPython and PyPy unless the comparison programs do at least run on both CPython and PyPy (and x86 and x64). From fijall at gmail.com Wed Apr 6 19:40:50 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 6 Apr 2011 19:40:50 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <795302.82949.qm@web65602.mail.ac4.yahoo.com> References: <795302.82949.qm@web65602.mail.ac4.yahoo.com> Message-ID: On Wed, Apr 6, 2011 at 7:33 PM, Isaac Gouy wrote: > > --- On Wed, 4/6/11, Maciej Fijalkowski wrote: > > -snip- > >> > CPython bugs - How strange that the CPython bug was >> never mentioned! - maybe. >> > >> >> Ok. The bug was not mentioned because it takes time to >> decide "it's a bug". > > > I know someone decided "it's a bug" because someone said so in a blog post they pushed out across the blogosphere and proggit and Hacker News and ... > > How strange that CPython bug was never mentioned to me! > > >> Since programs that only compile on GCC clearly >> are. 
>> > >> > How many C language implementations are shown? >> > >> > How many Python language implementations are shown? >> > >> > If only one Python language implementation was shown >> do you think it would be PyPy ? >> > >> >> I can't really read your mind, but from my opinion if the >> question is "how fast Python programs can be run" then the >> answer is Yes. > > The Help page says something about showing working programs written in less familiar programming languages. > Ok, will look it up later. > >> So the position is that GCC is allowed to use extensions >> because it's the only C implementation shown and PyPy is not, >> because all Python programs should run on each runtime, is that >> correct? > > I don't see a way to compare CPython and PyPy unless the comparison programs do at least run on both CPython and PyPy (and x86 and x64). > Well I can always write a program that runs on both (and uses more efficient data structure for pypy for example). The question is a bit academic, because I don't have any particular implementation now in mind. But would be cool to be able to do that. From ademan555 at gmail.com Wed Apr 6 21:49:56 2011 From: ademan555 at gmail.com (Dan Roberts) Date: Wed, 6 Apr 2011 12:49:56 -0700 Subject: [pypy-dev] Pypy custom interpreter JIT question In-Reply-To: References: <4D9474A4.3020909@gmail.com> <4D94F995.5020509@gmail.com> <4D99E219.3050308@gmx.de> <4D99F325.5050605@gmail.com> <4D9A0A29.1000603@gmail.com> <4D9B0FE2.9020003@gmx.de> <4D9C66E9.8020003@gmx.de> Message-ID: Hey sorry about that, I'm working full time now, and balancing life, school and full time work is new for me... http://paste.pocoo.org/show/366731/ Is it in its current state. I actually haven't tried it on "stock" PyPy. I wrote an extra JIT optimization in the fold_intadd branch, which gave me around 50% on mandelbrot, it may yield more for you, you might try it out (I hope to have it merged once I get another set of eyes on it, but like I said, life's been hectic, so I haven't gotten the ball rolling on code review yet). My 99bottles.bf performance is still abysmal. I'm going to go and implement caching of matching brackets right now (lunch break woo!) and see what that does for my performance. Cheers, Dan On Wed, Apr 6, 2011 at 7:03 AM, Andrew Brown wrote: > Hmm, looks like the line numbers for the JIT trace output are mis-aligned, > although it may just be my browser (Chrome beta). Looks fine in Firefox. Oh > well. > But anyways... > > On Wed, Apr 6, 2011 at 9:13 AM, Carl Friedrich Bolz wrote: >> >> Thanks again, Andrew! FWIW, the first post is already on place four of >> all PyPy blog posts in the ranking of page impressions. > > You're welcome! That's awesome to hear, I'm glad I could contribute. > Also, Dan, if you wanted to post your version I'm curious to see your > approach. > -Andrew From santagada at gmail.com Thu Apr 7 04:31:09 2011 From: santagada at gmail.com (Leonardo Santagada) Date: Wed, 6 Apr 2011 23:31:09 -0300 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <795302.82949.qm@web65602.mail.ac4.yahoo.com> References: <795302.82949.qm@web65602.mail.ac4.yahoo.com> Message-ID: On Wed, Apr 6, 2011 at 2:33 PM, Isaac Gouy wrote: > >> So the position is that GCC is allowed to use extensions >> because it's the only C implementation shown and PyPy is not, >> because all Python programs should run on each runtime, is that >> correct? 
> > I don't see a way to compare CPython and PyPy unless the comparison programs do at least run on both CPython and PyPy (and x86 and x64). I don't see why this is, it is the same as comparing python to ruby, I want to see how fast can you make a program in said vm that does the same task. If the description of the problem doesn't limit what you can use I really don't see why can't you use a PyPy (or cpython) extension for it. For me a shootout without extensions (at least without numpy) is just comparing how fast a language can do without anything, which is not interesting at all. One where c programs can use libraries but python cannot is even more meaningless. -- Leonardo Santagada From fijall at gmail.com Thu Apr 7 05:52:31 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 7 Apr 2011 05:52:31 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <795302.82949.qm@web65602.mail.ac4.yahoo.com> Message-ID: On Thu, Apr 7, 2011 at 4:31 AM, Leonardo Santagada wrote: > On Wed, Apr 6, 2011 at 2:33 PM, Isaac Gouy wrote: >> >>> So the position is that GCC is allowed to use extensions >>> because it's the only C implementation shown and PyPy is not, >>> because all Python programs should run on each runtime, is that >>> correct? >> >> I don't see a way to compare CPython and PyPy unless the comparison programs do at least run on both CPython and PyPy (and x86 and x64). > > I don't see why this is, it is the same as comparing python to ruby, I > want to see how fast can you make a program in said vm that does the > same task. If the description of the problem doesn't limit what you > can use I really don't see why can't you use a PyPy (or cpython) > extension for it. > > For me a shootout without extensions (at least without numpy) is just > comparing how fast a language can do without anything, which is not > interesting at all. One where c programs can use libraries but python > cannot is even more meaningless. Leonardo please calm down a bit. I can see reasons and why I might not agree with that, they do make sense and can be justified. It does make sense to compare CPython and PyPy on the same set of benchmarks (actually that's what we do with speed.pypy.org, we deliberately tried to avoid modifying benchmarks). > > -- > Leonardo Santagada > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From vincent.legoll at gmail.com Thu Apr 7 09:51:55 2011 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Thu, 7 Apr 2011 09:51:55 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <795302.82949.qm@web65602.mail.ac4.yahoo.com> Message-ID: Hello, On Thu, Apr 7, 2011 at 5:52 AM, Maciej Fijalkowski wrote: > It does make sense to compare CPython and > PyPy on the same set of benchmarks (actually that's what we do with > speed.pypy.org, we deliberately tried to avoid modifying benchmarks). But speed.pypy.org is about comparing cpython vs pypy, whereas the benchmark game compares a lot of quite different laguages, that is not exactly the same thing. So, that may warrant different rules... -- Vincent Legoll From damonmc at gmail.com Thu Apr 7 19:15:35 2011 From: damonmc at gmail.com (Damon McCormick) Date: Thu, 7 Apr 2011 10:15:35 -0700 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <795302.82949.qm@web65602.mail.ac4.yahoo.com> Message-ID: The benchmark game also compares on code size. 
So if PyPy provides better performance with smaller code size (i.e. if it
allows you to write something in the most concise, Pythonic way and get
great performance), this may not show up unless PyPy can run a different
version of a benchmark that actually uses less code.

(Disclaimer: it's been a few years since I looked at the benchmark game
Python programs.  Maybe they're already written very concisely, in which
case this point is moot)

-Damon

On Thu, Apr 7, 2011 at 12:51 AM, Vincent Legoll wrote:
> Hello,
>
> On Thu, Apr 7, 2011 at 5:52 AM, Maciej Fijalkowski wrote:
> > It does make sense to compare CPython and
> > PyPy on the same set of benchmarks (actually that's what we do with
> > speed.pypy.org, we deliberately tried to avoid modifying benchmarks).
>
> But speed.pypy.org is about comparing cpython vs pypy, whereas
> the benchmark game compares a lot of quite different laguages, that
> is not exactly the same thing. So, that may warrant different rules...
>
> --
> Vincent Legoll
> _______________________________________________
> pypy-dev at codespeak.net
> http://codespeak.net/mailman/listinfo/pypy-dev

From dimaqq at gmail.com  Thu Apr  7 20:16:18 2011
From: dimaqq at gmail.com (Dima Tisnek)
Date: Thu, 7 Apr 2011 11:16:18 -0700
Subject: [pypy-dev] PyPy in the benchmarks game - yes or no?
In-Reply-To: 
References: <795302.82949.qm@web65602.mail.ac4.yahoo.com>
Message-ID: 

btw, the benchmark game also tracks memory footprint; pypy is a little
liberal here for small benchmarks, of which there are many.

On 7 April 2011 10:15, Damon McCormick wrote:
> The benchmark game also compares on code size.  So if PyPy provides
> better performance with smaller code size (i.e. if it allows you to write
> something in the most concise, Pythonic way and get great performance),
> this may not show up unless PyPy can run a different version of a
> benchmark that actually uses less code.
> (Disclaimer: it's been a few years since I looked at the benchmark game
> Python programs.  Maybe they're already written very concisely, in which
> case this point is moot)
> -Damon
>
> On Thu, Apr 7, 2011 at 12:51 AM, Vincent Legoll wrote:
>>
>> Hello,
>>
>> On Thu, Apr 7, 2011 at 5:52 AM, Maciej Fijalkowski wrote:
>> > It does make sense to compare CPython and
>> > PyPy on the same set of benchmarks (actually that's what we do with
>> > speed.pypy.org, we deliberately tried to avoid modifying benchmarks).
>>
>> But speed.pypy.org is about comparing cpython vs pypy, whereas
>> the benchmark game compares a lot of quite different laguages, that
>> is not exactly the same thing. So, that may warrant different rules...
>>
>> --
>> Vincent Legoll
>> _______________________________________________
>> pypy-dev at codespeak.net
>> http://codespeak.net/mailman/listinfo/pypy-dev
>
>
> _______________________________________________
> pypy-dev at codespeak.net
> http://codespeak.net/mailman/listinfo/pypy-dev

From romain.py at gmail.com  Fri Apr  8 19:34:51 2011
From: romain.py at gmail.com (Romain Guillebert)
Date: Fri, 8 Apr 2011 18:34:51 +0100
Subject: [pypy-dev] Gothenburg sprint and sprint fund
Message-ID: <20110408173451.GA6615@ubuntu>

Hi

I would like to attend the sprint at the end of this month but I can't
afford the plane, the train (I come from Ireland and there is no direct
flight between Dublin and Gothenburg) and the accommodation, so I would
like to know if I can get help from the PyPy sprint fund.

Based on what I've looked up, it would cost:
- 17€ * 2 for the bus between my town and Dublin's airport
- 397.75€ for the plane between Dublin and Copenhagen (it's really
  expensive in my opinion but I can't find anything cheaper)
- 394 SEK * 2 for the train between Copenhagen and Gothenburg

I plan to stay from the 23rd to the 1st and will sleep at SGS
Veckobostader; I don't know if I can share a room with someone.  I can
handle some of it but I won't be able to make it without help.

Thanks

Romain

From igouy2 at yahoo.com  Fri Apr  8 19:43:15 2011
From: igouy2 at yahoo.com (Isaac Gouy)
Date: Fri, 8 Apr 2011 10:43:15 -0700 (PDT)
Subject: [pypy-dev] PyPy in the benchmarks game - yes or no?
In-Reply-To: 
Message-ID: <996929.61539.qm@web65602.mail.ac4.yahoo.com>

The benchmarks game web pages now only show one language implementation
for each programming language.

Java -Xint, Tracemonkey JavaScript, LuaJIT, CPython, Iron Python, PyPy,
Ruby 1.8.7 and JRuby 1.6 are no longer shown.

From fijall at gmail.com  Fri Apr  8 20:11:55 2011
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 8 Apr 2011 20:11:55 +0200
Subject: [pypy-dev] PyPy in the benchmarks game - yes or no?
In-Reply-To: <996929.61539.qm@web65602.mail.ac4.yahoo.com>
References: <996929.61539.qm@web65602.mail.ac4.yahoo.com>
Message-ID: 

On Fri, Apr 8, 2011 at 7:43 PM, Isaac Gouy wrote:
> The benchmarks game web pages now only show one language implementation
> for each programming language.
>
> Java -Xint, Tracemonkey JavaScript, LuaJIT, CPython, Iron Python, PyPy,
> Ruby 1.8.7 and JRuby 1.6 are no longer shown.

As I understand it, this is CPython 3.2 for Python, right?

Anyway, what's the point of the above discussion then?

>
> _______________________________________________
> pypy-dev at codespeak.net
> http://codespeak.net/mailman/listinfo/pypy-dev

From dasdasich at googlemail.com  Fri Apr  8 20:21:12 2011
From: dasdasich at googlemail.com (DasIch)
Date: Fri, 8 Apr 2011 20:21:12 +0200
Subject: [pypy-dev] [GSoC] Developing a benchmark suite (for Python 3.x)
In-Reply-To: 
References: 
Message-ID: 

I talked to Fijal about my project last night.  The result is that the
project as it stands is not that interesting, because the means to
execute the benchmarks on multiple interpreters are currently missing.
Another point we talked about was that porting the benchmarks would not
be very useful, as the interesting ones all have dependencies which have
not (yet) been ported to Python 3.x.

The first point, execution on multiple interpreters, has to be solved or
this project is pretty much pointless, therefore I've changed my proposal
to include just that.  However the proposal still includes porting the
benchmarks, although this is planned to happen after the development of
an application able to run the benchmarks on multiple interpreters.  The
reason for this is that even though the portable benchmarks might not
prove to be that interesting, the basic infrastructure for porting with
2to3 would be there, making it easier to port benchmarks in the future as
the dependencies become available under Python 3.x.  I plan to do the
porting after implementing the previously mentioned application, putting
the application at higher priority.  This way, should I not be able to
complete all my goals, it is unlikely that anything but the porting will
suffer, and the project would still produce useful results during the
GSoC.
Anyway here is the current, updated, proposal: Abstract ======= As of now there are several benchmark suites used by Python implementations, PyPy uses the benchmarks[1] developed for the Unladen Swallow[2] project as well as several other benchmarks they implemented on their own, CPython[3] uses the Unladen Swallow benchmarks and several "crap benchmarks used for historical reasons"[4]. This makes comparisons unnecessarily hard and causes confusion. As a solution to this problem I propose merging the existing benchmarks - at least those considered worth having - into a single benchmark suite which can be shared by all implementations and ported to Python 3.x. Another problem reported by Maciej Fijalkowski is that currenly the way benchmarks are executed by PyPy is more or less a hack. Work will have to be done to allow execution of the benchmarks on different interpreters and their most recent versions (from their respective repositories). The application for this should also be able to upload the results to a codespeed instance such as http://speed.pypy.org. Milestones ========= The project can be divided into several milestones: 1. Definition of the benchmark suite. This will entail contacting developers of Python implementations (CPython, PyPy, IronPython and Jython), via discussion on the appropriate mailing lists. This might be achievable as part of this proposal. 2. Merging the benchmarks. Based on the prior agreed upon definition, the benchmarks will be merged into a single suite. 3. Implementing a system to run the benchmarks. In order to execute the benchmarks it will be necessary to have a configurable application which downloads the interpreters from their repositories, builds them and executes the benchmarks with them. 4. Porting the suite to Python 3.x. The suite will be ported to 3.x using 2to3[5], as far as possible. The usage of 2to3 will make it easier make changes to the repository especially for those still focusing on 2.x. It is to be expected that some benchmarks cannot be ported due to dependencies which are not available on Python 3.x. Those will be ignored by this project to be ported at a later time, when the necessary requirements are met. Start of Program (May 24) ====================== Before the coding, milestones 2 and 3, can begin it is necessary to agree upon a set of benchmarks, everyone is happy with, as described. Midterm Evaluation (July 12) ======================= During the midterm I want to merge the benchmarks and implement a way to execute them. Final Evaluation (Aug 16) ===================== In this period the benchmark suite will be ported. If everything works out perfectly I will even have some time left, if there are problems I have a buffer here. Implementation of the Benchmark Runner ================================== In order to run the benchmarks I propose a simple application which can be configured to download multiple interpreters, to build them and execute the benchmarks. The configuration could be similar to tox[6], downloads of the interpreters could be handled using anyvc[7]. For a site such as http://speed.pypy.org a cronjob, buildbot or whatelse is preferred, could be setup which executes the application regularly. Repository Handling ================ The code for the project will be developed in a Mercurial[8] repository hosted on Bitbucket[9], both PyPy and CPython use Mercurial and most people in the Python community should be able to use it. 
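[Editor's note: to make the "Implementation of the Benchmark Runner"
section above more concrete, here is a minimal sketch of what the core
loop of such an application could look like.  Everything in it - the
configuration layout, the interpreter names and the benchmark file names
- is invented for illustration; the actual design (tox-like
configuration, checkouts via anyvc, uploading to a codespeed instance) is
left to the project.]

    # Hypothetical benchmark runner core: a configuration maps interpreter
    # names to the executables used to invoke them, and every benchmark
    # script is timed under every configured interpreter.

    import subprocess
    import time

    CONFIG = {
        'cpython': {'executable': 'python'},
        'pypy':    {'executable': 'pypy'},
    }

    BENCHMARKS = ['bench_richards.py', 'bench_nbody.py']  # placeholder names

    def run_benchmarks(config=CONFIG, benchmarks=BENCHMARKS):
        results = {}
        for name, options in config.items():
            for bench in benchmarks:
                start = time.time()
                subprocess.call([options['executable'], bench])
                results[(name, bench)] = time.time() - start
        return results

    if __name__ == '__main__':
        for (interp, bench), seconds in sorted(run_benchmarks().items()):
            print('%s / %s: %.2fs' % (interp, bench, seconds))

A real runner would additionally build or download the interpreters
before the loop and push the collected timings to a codespeed instance
instead of printing them.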
Probably Asked Questions ====================== Why not use one of the existing benchmark suites for porting? The effort will be wasted if there is no good base to build upon, creating a new benchmark suite based upon the existing ones ensures that. Why not use Git/Bazaar/...? Mercurial is used by CPython, PyPy and is fairly well known and used in the Python community. This ensures easy accessibility for everyone. What will happen with the Repository after GSoC/How will access to the repository be handled? I propose to give administrative rights to one or two representatives of each project. Those will provide other developers with write access. Communication ============= Communication of the progress will be done via Twitter[10] and my blog[11], if desired I can also send an email with the contents of the blog post to the mailing lists of the implementations. Furthermore I am usually quick to answer via IRC(DasIch on freenode), Twitter or E-Mail(dasdasich at gmail.com) if anyone has any questions. Contact to the mentor can be established via the means mentioned above or via Skype. About Me ======== My name is Daniel Neuh?user, I am 19 years old and currently a student at the Bergstadt-Gymnasium L?denscheid[12]. I started programming (with Python) about 4 years ago and became a member of the Pocoo Team[13] after successfully participating in the Google Summer of Code last year, during which I ported Sphinx[14] to Python 3.x and implemented an algorithm to diff abstract syntax trees to preserve comments and translated strings which has been used by the other GSoC projects targeting Sphinx. .. [1]: https://bitbucket.org/pypy/benchmarks/src .. [2]: http://code.google.com/p/unladen-swallow/ .. [3]: http://hg.python.org/benchmarks/file/tip/performance .. [4]: http://hg.python.org/benchmarks/file/62e754c57a7f/performance/README .. [5]: http://docs.python.org/library/2to3.html .. [6]: http://codespeak.net/tox/ .. [7]: http://anyvc.readthedocs.org/en/latest/?redir .. [8]: http://mercurial.selenic.com/ .. [9]: https://bitbucket.org/ .. [10]: http://twitter.com/#!/DasIch .. [11]: http://dasdasich.blogspot.com/ .. [12]: http://bergstadt-gymnasium.de/ .. [13]: http://www.pocoo.org/team/#daniel-neuhauser .. [14]: http://sphinx.pocoo.org/ From igouy2 at yahoo.com Fri Apr 8 20:30:06 2011 From: igouy2 at yahoo.com (Isaac Gouy) Date: Fri, 8 Apr 2011 11:30:06 -0700 (PDT) Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: Message-ID: <946445.91725.qm@web65615.mail.ac4.yahoo.com> --- On Fri, 4/8/11, Maciej Fijalkowski wrote: > > The benchmarks game web pages now only show one > language implementation for each programming language. > > > > Java -Xint, Tracemonkey JavaScript, LuaJIT, CPython, > Iron Python, PyPy, Ruby 1.8.7 and JRuby 1.6 are no longer > shown. > > > > As I understand this is CPython 3.2 for Python right? CPython 2.7 is no longer shown - 3.2 is still shown. > anyway, what's the point of the above discussion then? The point of the discussion was to hear the views of pypy-dev, and I have. From fijall at gmail.com Fri Apr 8 20:39:33 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 8 Apr 2011 20:39:33 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? 
In-Reply-To: <946445.91725.qm@web65615.mail.ac4.yahoo.com> References: <946445.91725.qm@web65615.mail.ac4.yahoo.com> Message-ID: On Fri, Apr 8, 2011 at 8:30 PM, Isaac Gouy wrote: > > > --- On Fri, 4/8/11, Maciej Fijalkowski wrote: > >> > The benchmarks game web pages now only show one >> language implementation for each programming language. >> > >> > Java -Xint, Tracemonkey JavaScript, LuaJIT, CPython, >> Iron Python, PyPy, Ruby 1.8.7 and JRuby 1.6 are no longer >> shown. >> > >> >> As I understand this is CPython 3.2 for Python right? > > CPython 2.7 is no longer shown - 3.2 is still shown. > > >> anyway, what's the point of the above discussion then? > > The point of the discussion was to hear the views of pypy-dev, and I have. > I guess a lot of discussions are about getting some sort of consensus. I see this one is so you can know what we think and that's it. Well, that comes as a bit of surprise. I think it's super stupid to remove Tracemonkey, LuaJIT and PyPy from it, but that's as you pointed out *your* website. On the other hand it's good, because people won't cite the computer language shootout anymore and those benchmarks are more silly than they have to be. Farewell. > > > > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From igouy2 at yahoo.com Fri Apr 8 21:03:06 2011 From: igouy2 at yahoo.com (Isaac Gouy) Date: Fri, 8 Apr 2011 12:03:06 -0700 (PDT) Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: Message-ID: <734328.13811.qm@web65602.mail.ac4.yahoo.com> --- On Fri, 4/8/11, Maciej Fijalkowski wrote: -snip- > I guess a lot of discussions are about getting some sort of > consensus. I see this one is so you can know what we think > and that's it. Well, that comes as a bit of surprise. I don't know why that would come as any surprise - "Of course, I'll make up my own mind but at least I'll be able to take your wishes into account." http://permalink.gmane.org/gmane.comp.python.pypy/7303 > I think it's super stupid to remove Tracemonkey, LuaJIT and > PyPy from it, but that's as you pointed out *your* website. > On the other hand it's good, because people won't cite the > computer language shootout anymore and those benchmarks are > more silly than they have to be. You express both of the contrary wishes that I've heard here this week - you seem to want yes and no :-) fwiw someone did write - "While a comparison between languages may be interesting, maybe having 1 implementation per language in the shootout would work better." - let's hope at least they are happy now. From fijall at gmail.com Fri Apr 8 21:12:06 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 8 Apr 2011 21:12:06 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <734328.13811.qm@web65602.mail.ac4.yahoo.com> References: <734328.13811.qm@web65602.mail.ac4.yahoo.com> Message-ID: On Fri, Apr 8, 2011 at 9:03 PM, Isaac Gouy wrote: > > > --- On Fri, 4/8/11, Maciej Fijalkowski wrote: > > -snip- >> I guess a lot of discussions are about getting some sort of >> consensus. I see this one is so you can know what we think >> and that's it. Well, that comes as a bit of surprise. > > I don't know why that would come as any surprise - "Of course, I'll make up my own mind but at least I'll be able to take your wishes into account." 
> > http://permalink.gmane.org/gmane.comp.python.pypy/7303 > > > >> I think it's super stupid to remove Tracemonkey, LuaJIT and >> PyPy from it, but that's as you pointed out *your* website. >> On the other hand it's good, because people won't cite the >> computer language shootout anymore and those benchmarks are >> more silly than they have to be. > > You express both of the contrary wishes that I've heard here this week - you seem to want yes and no :-) No. I think it's bad for you good for us. > > fwiw someone did write - "While a comparison between languages may be interesting, maybe having 1 implementation per language in the shootout would work better." - let's hope at least they are happy now. > I sure hope so. > > > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From piotr.skamruk at gmail.com Fri Apr 8 21:19:15 2011 From: piotr.skamruk at gmail.com (Piotr Skamruk) Date: Fri, 8 Apr 2011 21:19:15 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <734328.13811.qm@web65602.mail.ac4.yahoo.com> References: <734328.13811.qm@web65602.mail.ac4.yahoo.com> Message-ID: 2011/4/8 Isaac Gouy : > [...] >> I think it's super stupid to remove Tracemonkey, LuaJIT and >> PyPy from it, but that's as you pointed out *your* website. >> On the other hand it's good, because people won't cite the >> computer language shootout anymore and those benchmarks are >> more silly than they have to be. > > You express both of the contrary wishes that I've heard here this week - you seem to want yes and no :-) > > fwiw someone did write - "While a comparison between languages may be interesting, maybe having 1 implementation per language in the shootout would work better." - let's hope at least they are happy now. sorry, but now it's not a comparsion of languages, but comparison of several randomly selected programs. remember that You are not comparing languages, but their implementations (still that looks like randomly choosen - cpython 3.2 have much, much less usage than 2.6/2.7). From dimaqq at gmail.com Fri Apr 8 21:48:54 2011 From: dimaqq at gmail.com (Dima Tisnek) Date: Fri, 8 Apr 2011 12:48:54 -0700 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <734328.13811.qm@web65602.mail.ac4.yahoo.com> Message-ID: Overall one language implementation per language is a good idea for shootout. Consequently, using the latest version of most widely used implementation (cpython 3.2) is indeed the way to go, even in the face of all the 2.6 recalcitrants, myself included :-) As for pypy, I think it is in the pypy's best interest to be represented as widely as possible, I am surprised to see negative attitude here... Though I understand pypy implementation of these benchmarks requires work; btw that would be a great project for, say, summer of code. Coming back to Isaac's question, perhaps you want to a tiered representation for languages, e.g. compiled vs dynamic, then inside dynamic perl vs python, then inside that cpython 2 vs cpython 3 vs pypy. This is really a matter of presentation. Alternative is to list "supported" implementations separate from that untested/contributed/questionable/etc. Coming back to presentation, shootout is pretty old by now, compare it to speed.pypy.org which is outright sexy! Also you need serious QA of the benchmarks, rules, implementations and even the test environment itself. 
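(Purely as an illustration of the "tiered" layout suggested above - the grouping and the entries below are invented for the example and are not a proposal for the site's actual data; the idea is just a nesting of category -> language -> implementations:)

TIERS = {
    'compiled': {
        'c': ['gcc'],
    },
    'dynamic': {
        'perl': ['perl'],
        'python': ['cpython 2', 'cpython 3', 'pypy'],
    },
}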
Dima Tisnek On 8 April 2011 12:19, Piotr Skamruk wrote: > 2011/4/8 Isaac Gouy : >> [...] >>> I think it's super stupid to remove Tracemonkey, LuaJIT and >>> PyPy from it, but that's as you pointed out *your* website. >>> On the other hand it's good, because people won't cite the >>> computer language shootout anymore and those benchmarks are >>> more silly than they have to be. >> >> You express both of the contrary wishes that I've heard here this week - you seem to want yes and no :-) >> >> fwiw someone did write - "While a comparison between languages may be interesting, maybe having 1 implementation per language in the shootout would work better." - let's hope at least they are happy now. > sorry, but now it's not a comparsion of languages, but comparison of > several randomly selected programs. > remember that You are not comparing languages, but their > implementations (still that looks like randomly choosen - cpython 3.2 > have much, much less usage than 2.6/2.7). > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev >
From piotr.skamruk at gmail.com Fri Apr 8 22:22:26 2011 From: piotr.skamruk at gmail.com (Piotr Skamruk) Date: Fri, 8 Apr 2011 22:22:26 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: References: <734328.13811.qm@web65602.mail.ac4.yahoo.com> Message-ID: 2011/4/8 Dima Tisnek : > Overall one language implementation per language is a good idea for > shootout. Consequently, using the latest version of most widely used > implementation (cpython 3.2) is indeed the way to go, even in the face > of all the 2.6 recalcitrants, myself included :-) So the latest (3.2) or the most used (2.6)? Sorry, but 3.x is not widely used for now... > As for pypy, I think it is in the pypy's best interest to be > represented as widely as possible, I am surprised to see negative > attitude here... Though I understand pypy implementation of these > benchmarks requires work; btw that would be a great project for, say, > summer of code. pypy is probably the fastest implementation right now, but it also could not be described as "widely used" > Coming back to Isaac's question, perhaps you want to a tiered > representation for languages, e.g. compiled vs dynamic, then inside > dynamic perl vs python, then inside that cpython 2 vs cpython 3 vs > pypy. This is really a matter of presentation. Alternative is to list > "supported" implementations separate from that > untested/contributed/questionable/etc. Coming back to presentation, > shootout is pretty old by now, compare it to speed.pypy.org which is > outright sexy! Also you need serious QA of the benchmarks, rules, > implementations and even the test environment itself. If someone is comparing implementations of languages, then that person should probably choose the most commonly used version of the compiler/interpreter - not the "most recent release/revision". shootout.alioth.debian.org is run under the debian domain, so it's hard for me to understand why py3.2 was chosen as the python representative when it has minor usage even in debian... (in debian 2.6 is still more mainstream, and even 2.7 is not well supported outside of ubuntu) speed.pypy.org has reasonable value probably mostly for developers, with a strong focus on python developers. shootout.alioth.debian.org, as a more general site, is focused on informing users with less knowledge about programming.
In its current state, the comparison on shootout.alioth.debian.org introduces more confusion than useful information, which is probably what Maciej had in mind.
From igouy2 at yahoo.com Fri Apr 8 22:38:51 2011 From: igouy2 at yahoo.com (Isaac Gouy) Date: Fri, 8 Apr 2011 13:38:51 -0700 (PDT) Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: Message-ID: <518937.62474.qm@web65614.mail.ac4.yahoo.com> --- On Fri, 4/8/11, Piotr Skamruk wrote: -snip- > remember that You are not comparing languages, but their > implementations That's correct! "Which programming languages are fastest? No. Which programming language implementations have the fastest benchmark programs?" http://shootout.alioth.debian.org/u32/which-programming-languages-are-fastest.php#chart
From igouy2 at yahoo.com Fri Apr 8 23:25:28 2011 From: igouy2 at yahoo.com (Isaac Gouy) Date: Fri, 8 Apr 2011 14:25:28 -0700 (PDT) Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: Message-ID: <651381.86449.qm@web65607.mail.ac4.yahoo.com> --- On Fri, 4/8/11, Maciej Fijalkowski wrote: -snip- > No. I think it's bad for you good for us. Then you have your wish - be happy ;-)
From jacob at openend.se Sat Apr 9 00:43:43 2011 From: jacob at openend.se (Jacob =?iso-8859-1?q?Hall=E9n?=) Date: Sat, 9 Apr 2011 00:43:43 +0200 Subject: [pypy-dev] PyPy in the benchmarks game - yes or no? In-Reply-To: <734328.13811.qm@web65602.mail.ac4.yahoo.com> References: <734328.13811.qm@web65602.mail.ac4.yahoo.com> Message-ID: <201104090043.50198.jacob@openend.se> Friday 08 April 2011 you wrote: > --- On Fri, 4/8/11, Maciej Fijalkowski wrote: > > -snip- > > > I guess a lot of discussions are about getting some sort of > > consensus. I see this one is so you can know what we think > > and that's it. Well, that comes as a bit of surprise. > > I don't know why that would come as any surprise - "Of course, I'll make up > my own mind but at least I'll be able to take your wishes into account." > > http://permalink.gmane.org/gmane.comp.python.pypy/7303 > > > I think it's super stupid to remove Tracemonkey, LuaJIT and > > PyPy from it, but that's as you pointed out *your* website. > > On the other hand it's good, because people won't cite the > > computer language shootout anymore and those benchmarks are > > more silly than they have to be. > > You express both of the contrary wishes that I've heard here this week - > you seem to want yes and no :-) > > fwiw someone did write - "While a comparison between languages may be > interesting, maybe having 1 implementation per language in the shootout > would work better." - let's hope at least they are happy now. I think you should consider your own wishes a little more. If you want the language shootout to be relevant to people, you can't ignore multiple implementations. Especially, it seems excessively eccentric to ignore the fastest implementations of some languages while not doing so for others. I assume you are not measuring C speed by the old AT&T reference implementation. I think you have to make up your mind. Do you want to provide a service to other people, in which case you should work on making fair and reasonable comparisons, or do you want to mislead people, in which case you should continue on the current course. Perhaps you don't care either way; perhaps you are just in it for the ego boost. Then we want to find out, so we can stop wasting our time trying to get a reasonable outcome. I think you need to stop and think about what you really want, instead of emotionally reacting to evolving events.
When you have given it a good thought, you tell people what you have come up with and go off implementing it. Tose who like it will gather around, those who dislike it will go away. If you don't know what you want out of the language shootout, you might ask people in different camps what they would like. You will be surprised by the number of sane and reasonable answers you would get. Jacob Hall?n -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From getxsick at gmail.com Sat Apr 9 13:12:04 2011 From: getxsick at gmail.com (Bartosz SKOWRON) Date: Sat, 9 Apr 2011 13:12:04 +0200 Subject: [pypy-dev] Gothenburg sprint and sprint fund In-Reply-To: <20110408173451.GA6615@ubuntu> References: <20110408173451.GA6615@ubuntu> Message-ID: On Fri, Apr 8, 2011 at 7:34 PM, Romain Guillebert wrote: > - 397.75? for the plane between Dublin and Copenhagen (It's really > ?expensive in my opinion but I can't find something cheaper) > - 394 SEK * 2 for the train between Copenhagen and Gothenburg Very quick research showed me a connection between Dublin and Gothenburg for 160?. From lac at openend.se Sat Apr 9 13:17:16 2011 From: lac at openend.se (Laura Creighton) Date: Sat, 09 Apr 2011 13:17:16 +0200 Subject: [pypy-dev] Gothenburg sprint and sprint fund In-Reply-To: Message from Bartosz SKOWRON of "Sat, 09 Apr 2011 13:12:04 +0200." References: <20110408173451.GA6615@ubuntu> Message-ID: <201104091117.p39BHGsY006157@theraft.openend.se> What did you find? Because both of the Gothenburg airports claim to have no connections to Ireland whatsoever. Laura, who would be very pleasantly surprised if this is wrong. From getxsick at gmail.com Sat Apr 9 13:19:32 2011 From: getxsick at gmail.com (Bartosz SKOWRON) Date: Sat, 9 Apr 2011 13:19:32 +0200 Subject: [pypy-dev] Gothenburg sprint and sprint fund In-Reply-To: <201104091117.p39BHGsY006157@theraft.openend.se> References: <20110408173451.GA6615@ubuntu> <201104091117.p39BHGsY006157@theraft.openend.se> Message-ID: On Sat, Apr 9, 2011 at 1:17 PM, Laura Creighton wrote: > What did you find? ?Because both of the Gothenburg airports claim to > have no connections to Ireland whatsoever. This is with a stop in London Stansted. However now I found a direct connection from Dublin to Stockholm Skavsta for 95? by RyanAir. It's really cheap. From lac at openend.se Sat Apr 9 13:52:13 2011 From: lac at openend.se (Laura Creighton) Date: Sat, 09 Apr 2011 13:52:13 +0200 Subject: [pypy-dev] Gothenburg sprint and sprint fund In-Reply-To: Message from Bartosz SKOWRON of "Sat, 09 Apr 2011 13:19:32 +0200." References: <20110408173451.GA6615@ubuntu> <201104091117.p39BHGsY006157@theraft.openend.se> Message-ID: <201104091152.p39BqDw1009245@theraft.openend.se> In a message of Sat, 09 Apr 2011 13:19:32 +0200, Bartosz SKOWRON writes: >On Sat, Apr 9, 2011 at 1:17 PM, Laura Creighton wrote: > >> What did you find? =C2=A0Because both of the Gothenburg airports claim >to >> have no connections to Ireland whatsoever. > >This is with a stop in London Stansted. However now I found a direct >connection from Dublin to Stockholm Skavsta for by RyanAir. It's >really cheap. The Skavsta Airport really isn't in Stockholm -- it's 90 minutes away by bus. What is is close to is Nyk?ping, which is 7 km away. And you can catch a train to G?teborg from Nyk?ping. So this looks like a lot cheaper. Thank you. 
http://www.skavsta.se/sv/content/2/38/t%C3%A5g.html says that it costs 200 SEK to get to the Nyk?ping train station from Skavska airport by taxi, or 20 SEK if you take the local bus 515 or 715 Laura From lac at openend.se Sat Apr 9 14:05:15 2011 From: lac at openend.se (Laura Creighton) Date: Sat, 09 Apr 2011 14:05:15 +0200 Subject: [pypy-dev] Gothenburg sprint and sprint fund In-Reply-To: Message from Laura Creighton of "Sat, 09 Apr 2011 13:52:13 +0200." <201104091152.p39BqDw1009245@theraft.openend.se> References: <20110408173451.GA6615@ubuntu> <201104091117.p39BHGsY006157@theraft.openend.se> <201104091152.p39BqDw1009245@theraft.openend.se> Message-ID: <201104091205.p39C5FPP010642@theraft.openend.se> In a message of Sat, 09 Apr 2011 13:52:13 +0200, Laura Creighton writes: re: >>This is with a stop in London Stansted. However now I found a direct >>connection from Dublin to Stockholm Skavsta for mailer didn't like-- laura > by RyanAir. It's >>really cheap. And I am getting a price of 39,99 Euros out (on the 23rd) and 9,98 if you leave on the 3rd. 59,99 if you leave on the 2nd. So yes, much cheaper. Thank you. Laura From getxsick at gmail.com Sat Apr 9 14:27:49 2011 From: getxsick at gmail.com (Bartosz SKOWRON) Date: Sat, 9 Apr 2011 14:27:49 +0200 Subject: [pypy-dev] Gothenburg sprint and sprint fund In-Reply-To: <201104091205.p39C5FPP010642@theraft.openend.se> References: <20110408173451.GA6615@ubuntu> <201104091117.p39BHGsY006157@theraft.openend.se> <201104091152.p39BqDw1009245@theraft.openend.se> <201104091205.p39C5FPP010642@theraft.openend.se> Message-ID: On Sat, Apr 9, 2011 at 2:05 PM, Laura Creighton wrote: > And I am getting a price of 39,99 Euros out (on the 23rd) > and 9,98 if you leave on the 3rd. ?59,99 if you leave on the 2nd. > > So yes, much cheaper. ?Thank you. :) I'm glad I could help. Wish to come to the sprint either but can't make it for this date :( From lac at openend.se Sat Apr 9 14:38:53 2011 From: lac at openend.se (Laura Creighton) Date: Sat, 09 Apr 2011 14:38:53 +0200 Subject: [pypy-dev] Gothenburg sprint and sprint fund In-Reply-To: Message from Bartosz SKOWRON of "Sat, 09 Apr 2011 14:27:49 +0200." References: <20110408173451.GA6615@ubuntu> <201104091117.p39BHGsY006157@theraft.openend.se> <201104091152.p39BqDw1009245@theraft.openend.se> <201104091205.p39C5FPP010642@theraft.openend.se> Message-ID: <201104091238.p39Ccr2J013284@theraft.openend.se> In a message of Sat, 09 Apr 2011 14:27:49 +0200, Bartosz SKOWRON writes: >On Sat, Apr 9, 2011 at 2:05 PM, Laura Creighton wrote: > >> And I am getting a price of 39,99 Euros out (on the 23rd) >> and 9,98 if you leave on the 3rd. =C2=A059,99 if you leave on the 2nd. >> >> So yes, much cheaper. =C2=A0Thank you. > >:) I'm glad I could help. Wish to come to the sprint either but can't >make it for this date :( That's really too bad. Are you going to Europython? Laura From getxsick at gmail.com Sat Apr 9 14:42:27 2011 From: getxsick at gmail.com (Bartosz SKOWRON) Date: Sat, 9 Apr 2011 14:42:27 +0200 Subject: [pypy-dev] Gothenburg sprint and sprint fund In-Reply-To: <201104091238.p39Ccr2J013284@theraft.openend.se> References: <20110408173451.GA6615@ubuntu> <201104091117.p39BHGsY006157@theraft.openend.se> <201104091152.p39BqDw1009245@theraft.openend.se> <201104091205.p39C5FPP010642@theraft.openend.se> <201104091238.p39Ccr2J013284@theraft.openend.se> Message-ID: On Sat, Apr 9, 2011 at 2:38 PM, Laura Creighton wrote: > That's really too bad. ?Are you going to Europython? Yes, I'm going to come for the conference and the PyPy sprint. 
At least I will try to come ;-)
From jacob at openend.se Sat Apr 9 22:50:07 2011 From: jacob at openend.se (Jacob =?utf-8?q?Hall=C3=A9n?=) Date: Sat, 9 Apr 2011 22:50:07 +0200 Subject: [pypy-dev] Thoughts on multithreading in PyPy Message-ID: <201104092250.14600.jacob@openend.se> I have been thinking about the multithreading problem in PyPy for a while and I have come up with an idea. I'd like to have feedback from people who know the codebase well. The first and hardest step is to change the PyPy runtime so that it can run multiple threads at the same time. To simplify matters, we allocate all external resources to one thread to start with. We assume that other threads don't use them. Neither do they call into extension modules that do messy things. Whenever we spawn a new thread, we give it its own object space and its own instance of the memory manager/garbage collector. Having gotten this far, we would have N threads that could run in parallel. Since they have no interaction with each other and no contention for resources, they would require no locking mechanism. The thread with the external resources would still be dependent on the GIL, but the other ones wouldn't even see it. This setup would of course be utterly useless, because all but one of the threads would have no means of communicating their results to the world. So, in a second step, we provide for special data types that can be shared between threads. These would typically be allocated in non-movable memory, to avoid the complexity of garbage collection of memory with shared use. You can make simple fifo structures for communication between the threads and complex structures with advanced algorithms for dealing with shared access. In a third step, you may relax the requirement that the first thread owns all resources. You should be able to hand them out in a controlled manner. For instance, you may want to spawn a thread for each socket connection and have that thread deal with all the communication with the socket. Now I wonder about the feasibility of the first step. How much global state would have to be wrapped in per-thread objects and how hard would that be? What other obstacles would there be to doing this change? I guess there is a complication with requesting memory from the kernel and returning memory, but I think that could be solved in more or less elegant ways. Jacob Hallén -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL:
From arigo at tunes.org Sat Apr 9 23:28:34 2011 From: arigo at tunes.org (Armin Rigo) Date: Sat, 9 Apr 2011 23:28:34 +0200 Subject: [pypy-dev] Thoughts on multithreading in PyPy In-Reply-To: <201104092250.14600.jacob@openend.se> References: <201104092250.14600.jacob@openend.se> Message-ID: Hi Jacob, On Sat, Apr 9, 2011 at 10:50 PM, Jacob Hallén wrote: > So, in a second step, we provide for special data types that can be shared > between threads. These would typically be allocated in non-movable memory, to > avoid the complexity of garbage collection of memory with shared use. You can > make simple fifo structures for communication between the threads and complex > structures with advanced algorithms for dealing with shared access. That's where the real issue is. 
You can come up with some reasonable API to communicate with other threads, but precisely, they will be some API, which means that they will only work in programs written specifically to use them. Designing a new API (at the level of the Python language) is something we carefully avoided so far in PyPy; but it's possible that this issue is important enough for us to break that rule :-) What you are describing sounds similar to the multiprocessing module in CPython, which achieves the same goal using separated processes (and tons and tons of hacks), and requires the program to use a custom API. The advantage of doing it in PyPy rather than CPython is probably limited to the fact that it would be easier in PyPy (but still some work) to make sure that the multiple threads have no shared state. You still have to design some custom API. There is also another possible goal with more "pypy-like" goals and results, which would be to use some technique to "weave" a solution in the interpreter transparently for the Python programmer (so, a solution that works without requiring the Python programmer to learn another system than threads). I can imagine a Software Transactional Memory solution that would in theory work very nicely, but in practice have completely dreadful performance, because it would do large amounts of checked memory access for each bytecode. As far as I know it means that that approach does not work, but it may one day, if Hardware Transactional Memory really shows up and supports that scale. A bient?t, Armin. From jacob at openend.se Sun Apr 10 00:15:50 2011 From: jacob at openend.se (Jacob =?iso-8859-1?q?Hall=E9n?=) Date: Sun, 10 Apr 2011 00:15:50 +0200 Subject: [pypy-dev] Thoughts on multithreading in PyPy In-Reply-To: References: <201104092250.14600.jacob@openend.se> Message-ID: <201104100015.57221.jacob@openend.se> Saturday 09 April 2011 you wrote: > Hi Jacob, > > On Sat, Apr 9, 2011 at 10:50 PM, Jacob Hall?n wrote: > > So, in a second step, we provide for special data types that can be > > shared between threads. These would typically be allocated in > > non-movable memory, to avoid the complexity of garbage collection of > > memory with shared use. You can make simple fifo structures for > > communication between the threads and complex structures with advanced > > algorithms for dealing with shared access. > > That's where the real issue is. You can come up with some reasonable > API to communicate with other threads, but precisely, they will be > some API, which means that they will only work in programs written > specifically to use them. Designing a new API (at the level of the > Python language) is something we carefully avoided so far in PyPy; but > it's possible that this issue is important enough for us to break that > rule :-) > > What you are describing sounds similar to the multiprocessing module > in CPython, which achieves the same goal using separated processes > (and tons and tons of hacks), and requires the program to use a custom > API. The advantage of doing it in PyPy rather than CPython is > probably limited to the fact that it would be easier in PyPy (but > still some work) to make sure that the multiple threads have no shared > state. You still have to design some custom API. The multiprocessing API contains some classic primitives that could be kept. Lots of the rest seem way too complicated. This is because they are dealing with separate processes. I think you are downplaying the advantages of using PyPy. 
Apart from being easier and cleaner to implement, it would be using threads instead of processes, providing for much quicker communication and context switches between threads. Then it would be able to use the JIT, providing much better performance. You could also provide proxy object spaces to transparently spread load over multiple physical machines. Now, I don't think we should go ahead and start work on this now. I just like exploring the idea. If people come along wanting to do GIL removal, we can present them with a plan and set them off working. > There is also another possible goal with more "pypy-like" goals and > results, which would be to use some technique to "weave" a solution in > the interpreter transparently for the Python programmer (so, a > solution that works without requiring the Python programmer to learn > another system than threads). I can imagine a Software Transactional > Memory solution that would in theory work very nicely, but in practice > have completely dreadful performance, because it would do large > amounts of checked memory access for each bytecode. As far as I know > it means that that approach does not work, but it may one day, if > Hardware Transactional Memory really shows up and supports that scale. While being a very neat idea, it is still pie in the sky. My idea could be pie-on-plate, though it hasn't been baked yet. Jacob -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From dimaqq at gmail.com Sun Apr 10 05:02:01 2011 From: dimaqq at gmail.com (Dima Tisnek) Date: Sat, 9 Apr 2011 20:02:01 -0700 Subject: [pypy-dev] Thoughts on multithreading in PyPy In-Reply-To: <201104100015.57221.jacob@openend.se> References: <201104092250.14600.jacob@openend.se> <201104100015.57221.jacob@openend.se> Message-ID: IMO real zest and usefulness of multithreading are complex shared data structures. That said, most multithreaded programs share a very small, albeit critical subset of their data. What you propose is something akin to cpython multiprocessing or stackless with the addition of parallel execution of independent tasklets. I think stackless users would like that actually. Now when you get to the point of introducing some communication between processes, do you mean to pass only byte streams? primitive types? complex data structures? The last you cannot do as you have separate gc's, so you are limited to copyable data structures only, which is as useful as multiprocessing (concept, not module) and json/pickle messages. In short, a clone of multiprocessing would be useful, perhaps when numpy evolves to support user defined pypy functions well. It's a niche, not general approach though. d. On 9 April 2011 15:15, Jacob Hall?n wrote: > Saturday 09 April 2011 you wrote: >> Hi Jacob, >> >> On Sat, Apr 9, 2011 at 10:50 PM, Jacob Hall?n wrote: >> > So, in a second step, we provide for special data types that can be >> > shared between threads. These would typically be allocated in >> > non-movable memory, to avoid the complexity of garbage collection of >> > memory with shared use. You can make simple fifo structures for >> > communication between the threads and complex structures with advanced >> > algorithms for dealing with shared access. >> >> That's where the real issue is. 
?You can come up with some reasonable >> API to communicate with other threads, but precisely, they will be >> some API, which means that they will only work in programs written >> specifically to use them. ?Designing a new API (at the level of the >> Python language) is something we carefully avoided so far in PyPy; but >> it's possible that this issue is important enough for us to break that >> rule :-) >> >> What you are describing sounds similar to the multiprocessing module >> in CPython, which achieves the same goal using separated processes >> (and tons and tons of hacks), and requires the program to use a custom >> API. ?The advantage of doing it in PyPy rather than CPython is >> probably limited to the fact that it would be easier in PyPy (but >> still some work) to make sure that the multiple threads have no shared >> state. ?You still have to design some custom API. > > The multiprocessing API contains some classic primitives that could be kept. > Lots of the rest seem way too complicated. This is because they are dealing > with separate processes. > > I think you are downplaying the advantages of using PyPy. Apart from being > easier and cleaner to implement, it would be using threads instead of > processes, providing for much quicker communication and context switches > between threads. Then it would be able to use the JIT, providing much better > performance. You could also provide proxy object spaces to transparently > spread load over multiple physical machines. > > Now, I don't think we should go ahead and start work on this now. I just like > exploring the idea. If people come along wanting to do GIL removal, we can > present them with a plan and set them off working. > >> There is also another possible goal with more "pypy-like" goals and >> results, which would be to use some technique to "weave" a solution in >> the interpreter transparently for the Python programmer (so, a >> solution that works without requiring the Python programmer to learn >> another system than threads). ?I can imagine a Software Transactional >> Memory solution that would in theory work very nicely, but in practice >> have completely dreadful performance, because it would do large >> amounts of checked memory access for each bytecode. ?As far as I know >> it means that that approach does not work, but it may one day, if >> Hardware Transactional Memory really shows up and supports that scale. > > While being a very neat idea, it is still pie in the sky. My idea could be > pie-on-plate, though it hasn't been baked yet. 
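(For concreteness, here is a rough sketch of the kind of shared fifo being talked about above. It is written against today's threading module purely to show the shape of the API; it is not an existing or proposed PyPy interface, and in the proposal put()/get() would be backed by one of the "special shared data types" in non-movable memory rather than by an ordinary lock:)

import collections
import threading

class Fifo(object):
    # a thread-safe first-in-first-out channel (illustrative only)
    def __init__(self):
        self._items = collections.deque()
        self._cond = threading.Condition()

    def put(self, obj):
        # only simple, copyable data would be allowed across the boundary
        with self._cond:
            self._items.append(obj)
            self._cond.notify()

    def get(self):
        with self._cond:
            while not self._items:
                self._cond.wait()
            return self._items.popleft()

def worker(inbox, outbox):
    # a worker would own its private object space and touch the outside
    # world only through its two fifos
    for task in iter(inbox.get, None):      # None acts as a stop sentinel
        outbox.put(task * task)

inbox, outbox = Fifo(), Fifo()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for i in range(5):
    inbox.put(i)
inbox.put(None)
print [outbox.get() for _ in range(5)]      # prints [0, 1, 4, 9, 16]
t.join()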
> > Jacob > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev >
From pypy at pocketnix.org Sun Apr 10 13:33:40 2011 From: pypy at pocketnix.org (pypy at pocketnix.org) Date: Sun, 10 Apr 2011 11:33:40 +0000 Subject: [pypy-dev] Thoughts on multithreading in PyPy In-Reply-To: <201104100015.57221.jacob@openend.se> References: <201104092250.14600.jacob@openend.se> <201104100015.57221.jacob@openend.se> Message-ID: <20110410113340.GB7395@pocketnix.org> What I am wondering about is whether some of the base services provided by pypy can be moved into another thread, e.g. GC and JIT compilation, and how much of a benefit there would be to doing so. At least with moving the GC to another thread, I would think that doing so would provide some insight into concurrent access to python objects
From matthew at woodcraft.me.uk Sun Apr 10 14:46:05 2011 From: matthew at woodcraft.me.uk (Matthew Woodcraft) Date: Sun, 10 Apr 2011 13:46:05 +0100 Subject: [pypy-dev] jit.backend.test.test_random Message-ID: <20110410124605.GB21912@golux.woodcraft.me.uk> I found that pypy.jit.backend.test.test_random is failing if I pass --backend=x86 The following patch fixes it for me: --- a/pypy/jit/backend/test/test_random.py +++ b/pypy/jit/backend/test/test_random.py @@ -717,6 +717,7 @@ def test_random_function(BuilderClass=OperationBuilder): r = Random() cpu = get_cpu() + cpu.setup_once() if pytest.config.option.repeat == -1: while 1: check_random_function(cpu, BuilderClass, r) -M-
From exarkun at twistedmatrix.com Sun Apr 10 17:06:24 2011 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Sun, 10 Apr 2011 15:06:24 -0000 Subject: [pypy-dev] Thoughts on multithreading in PyPy In-Reply-To: References: <201104092250.14600.jacob@openend.se> <201104100015.57221.jacob@openend.se> Message-ID: <20110410150624.1992.1458428484.divmod.xquotient.365@localhost.localdomain> On 03:02 am, dimaqq at gmail.com wrote: >IMO real zest and usefulness of multithreading are complex shared data >structures. You forgot to say "immutable" ;) Jean-Paul
From arigo at tunes.org Sun Apr 10 21:44:40 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 10 Apr 2011 21:44:40 +0200 Subject: [pypy-dev] Thoughts on multithreading in PyPy In-Reply-To: <20110410113340.GB7395@pocketnix.org> References: <201104092250.14600.jacob@openend.se> <201104100015.57221.jacob@openend.se> <20110410113340.GB7395@pocketnix.org> Message-ID: Hi, On Sun, Apr 10, 2011 at 1:33 PM, wrote: > What i am wondering about is if some of the base services provided by > pypy can be moved into another thread, eg GC and JIT compilation and > how much of a benfiit there would be to doing so It would be possible to move the GC or the JIT to another thread, introducing a lot of complexity but keeping it hidden from the Python programmer. You would end up with a program that can use maybe 1.2 cores instead of just 1. That sounds like a lot of work for a minimal gain. A bientôt, Armin.
From arigo at tunes.org Sun Apr 10 21:50:44 2011 From: arigo at tunes.org (Armin Rigo) Date: Sun, 10 Apr 2011 21:50:44 +0200 Subject: [pypy-dev] jit.backend.test.test_random In-Reply-To: <20110410124605.GB21912@golux.woodcraft.me.uk> References: <20110410124605.GB21912@golux.woodcraft.me.uk> Message-ID: Hi Matthew, On Sun, Apr 10, 2011 at 2:46 PM, Matthew Woodcraft wrote: > +    cpu.setup_once() Thanks, fixed. 
We did not notice this failure because pypy/jit/backend/x86/test/test_zll_random.py provides its own interface over calling test_random.py directly with --backend=x86. A bient?t, Armin. From bea at changemaker.nu Sun Apr 10 22:59:26 2011 From: bea at changemaker.nu (Bea During) Date: Sun, 10 Apr 2011 22:59:26 +0200 Subject: [pypy-dev] Gothenburg sprint and sprint fund In-Reply-To: References: <20110408173451.GA6615@ubuntu> <201104091117.p39BHGsY006157@theraft.openend.se> <201104091152.p39BqDw1009245@theraft.openend.se> <201104091205.p39C5FPP010642@theraft.openend.se> <201104091238.p39Ccr2J013284@theraft.openend.se> Message-ID: <4DA21A2E.5090506@changemaker.nu> Hi there Bartosz SKOWRON skrev 2011-04-09 14:42: > On Sat, Apr 9, 2011 at 2:38 PM, Laura Creighton wrote: > >> That's really too bad. Are you going to Europython? > Yes, I'm going to come for the conference and the PyPy sprint. > At least I will try to come ;-) > _______________________________________________ Just a reminder that this sprint related thread of discussion fits better on the mailinglist we have specifically for these purposes: pypy-sprint at codespeak.net. Cheers Bea From fijall at gmail.com Mon Apr 11 02:01:42 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 11 Apr 2011 02:01:42 +0200 Subject: [pypy-dev] An interesting article on chip design Message-ID: For those interested in hardware/assembler. http://www.lighterra.com/papers/modernmicroprocessors/ It's a good read and fills some of our gaps :) Cheers, fijal From fijall at gmail.com Mon Apr 11 11:53:58 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 11 Apr 2011 11:53:58 +0200 Subject: [pypy-dev] Waf benchmark Message-ID: Hello. I propose the waf benchmark removal. Originally, the idea was that we're slower than CPython for no good reason. Now that this benchmark measures some obscure piece of stdlib time (subprocesses) I don't think it's that necessary. Besides: * the variation between runs is too big, so we don't care * noone was ever remotely interested in speeding this up any opinions? Cheers, fijal From pypy at pocketnix.org Mon Apr 11 13:44:21 2011 From: pypy at pocketnix.org (pypy at pocketnix.org) Date: Mon, 11 Apr 2011 11:44:21 +0000 Subject: [pypy-dev] translationmodules option failing Message-ID: <20110411114421.GC7395@pocketnix.org> Hi just tried to bootstrap myself in the quickest way possible via the --translationmodules option (cmdline below) and encountered an issue with the md5 module which appears to be renamed _md5. the patch below corrects this. 
while there was an md5 directory present in my tree it only contained untracked pyc files and as such i have also removed this ------------------------------------- ./translate.py -O jit --thread --make-jobs=9 ./targetpypystandalone.py --translationmodules ------------------------------------- --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -39,7 +39,7 @@ translation_modules = default_modules.copy() translation_modules.update(dict.fromkeys( ["fcntl", "rctime", "select", "signal", "_rawffi", "zlib", - "struct", "md5", "cStringIO", "array"])) + "struct", "_md5", "cStringIO", "array"])) ------------------------------------- From pypy at pocketnix.org Mon Apr 11 14:17:39 2011 From: pypy at pocketnix.org (pypy at pocketnix.org) Date: Mon, 11 Apr 2011 12:17:39 +0000 Subject: [pypy-dev] benchunix.py fixes Message-ID: <20110411121739.GD7395@pocketnix.org> Hi bench-unix.py did not work so i did q quick repair job and enhancement on a small bit of the code, included as 3 patches below. i seem to recall someone mentioning that this was depreciated however as a new user it was tempting to use it as it was the first thing i saw and is a nice quick test for getting a rough guide of the relative performance of several versions of pypy i have generated currently contemplating a rewrite of the code so let me know if there is something better or if someone else is working on something ---------------------------------------------------- Add newer python implementations and benchmark system default interpreter --- a/pypy/translator/goal/bench-unix.py +++ b/pypy/translator/goal/bench-unix.py @@ -102,7 +102,7 @@ ref_rich, ref_stone = None, None # for exe in '/usr/local/bin/python2.5 python2.4 python2.3'.split(): - for exe in 'python2.4 python2.3'.split(): + for exe in 'python2.7 python2.6 python2.4 python2.3 python'.split(): v = os.popen(exe + ' -c "import sys;print sys.version.split()[0]"').rea if not v: continue ------------------------------------------------------------------------------ Discard missing interpreters --- a/pypy/translator/goal/bench-unix.py +++ b/pypy/translator/goal/bench-unix.py @@ -102,7 +102,13 @@ ref_rich, ref_stone = None, None # for exe in '/usr/local/bin/python2.5 python2.4 python2.3'.split(): - for exe in 'python2.4 python2.3'.split(): + for exe in 'python2.7 python2.6 python2.4 python2.3 python'.split(): + path = os.environ.get("PATH", "") + path = [x + os.sep + exe for x in path.split(os.pathsep) if exe in os.l + if len(path) > 0: + exe = path[0] + else: + continue v = os.popen(exe + ' -c "import sys;print sys.version.split()[0]"').rea if not v: continue ------------------------------------------------------------------------------ Fix off by one --- a/pypy/translator/goal/bench-unix.py +++ b/pypy/translator/goal/bench-unix.py @@ -85,7 +85,7 @@ if os.path.isdir(exe) or exe.endswith('.jar'): continue try: - exes.append( (exe.split('-')[2], exe) ) + exes.append( (exe.split('-')[1], exe) ) except: pass #skip filenames without version number exes.sort() ------------------------------------------------------------------------------ From pypy at pocketnix.org Mon Apr 11 14:45:41 2011 From: pypy at pocketnix.org (pypy at pocketnix.org) Date: Mon, 11 Apr 2011 12:45:41 +0000 Subject: [pypy-dev] File overwriting (--output flag to translate.py) Message-ID: <20110411124541.GE7395@pocketnix.org> Hi again i have been compiling a bunch of different pypy instances with different levels of optimizations and features and found that if i run pypy-c from the current directory and 
dont specify a new output filename it will attempt to and fail to overwrite pypy-c due to the file being "in use". unfortunately the exception generated is in the shutil lib from mem and the error message/exception does not give away immediately what the cause is which can lead to some frustration on some of the longer compiles ;) its quick and dirty and i don't mind if it gets changed at all. not 100% sure what the correct way to report an error and abandon is other than what everything else does (uncaught exception to a pdb shell) --- a/pypy/translator/goal/translate.py +++ b/pypy/translator/goal/translate.py @@ -285,6 +285,10 @@ elif drv.exe_name is None and '__name__' in targetspec_dic: drv.exe_name = targetspec_dic['__name__'] + '-%(backend)s' + # Ensure the file does not exisit else we fail at end of translation + if os.path.isfile(drv.exe_name): + raise ValueError('File "' + drv.exe_name+ '" already exisits (--output)') + goals = translateconfig.goals try: drv.proceed(goals) From me at samuelreis.de Mon Apr 11 14:38:44 2011 From: me at samuelreis.de (Samuel Reis) Date: Mon, 11 Apr 2011 14:38:44 +0200 Subject: [pypy-dev] Licensing of PyPy Speed Logo Artwork Message-ID: <5840850C-21C9-4356-A3DD-9992124AD0CC@samuelreis.de> Hello everybody. As fijal asked me to do, I will now clarify the licensing of the PyPy Logo Artwork that was provided by me and is currently in use at speed.pypy.org. Herewith I declare this artwork to be subject to the terms of the Creative Commons Attribution-Share Alike license (http://creativecommons.org/licenses/by-sa/3.0/). I view the aspect of attribution to be fulfilled by the mention of my name in the LICENSE file of the PyPy source distribution. I wish you guys good progress and success, so that we all can see a compliant Python that is gaining more and more speed. Greetings, Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From bea at changemaker.nu Mon Apr 11 14:47:28 2011 From: bea at changemaker.nu (Bea During) Date: Mon, 11 Apr 2011 14:47:28 +0200 Subject: [pypy-dev] Pre sprint beer session Sunday 24th of April Message-ID: <4DA2F860.6030606@changemaker.nu> ( emailing this list since I was not allowed to post to pypy-sprint and people don?t seem to use it?) Hi there Since I am travelling to Japan on the 25th of April I will miss the Gothenburg sprint (that was not the original plan but my family had to move our trip because of what happened there). I still would like to meet up so I invite you to a pre sprint beer session at Bishops Arms pub in central Gothenburg, Sunday 24th of April, starting from 18:00. The Bishop Arms pub is a very nice, quiet pub (library inspired but cosy) and they have a good selection of beers and whiskey. Here is a map: http://www.bishopsarms.com/Goteborg_Park/Presentation This way I get to meet most of you - and drink beer! Talk about eating the cake and having it ;-) Cheers Bea p.s. Anto - will miss you there d.s From lac at openend.se Mon Apr 11 15:40:11 2011 From: lac at openend.se (Laura Creighton) Date: Mon, 11 Apr 2011 15:40:11 +0200 Subject: [pypy-dev] thinking about the EuroPython sprint Message-ID: <201104111340.p3BDeB7t019802@theraft.openend.se> 2 days after the conference is not a lot of time. Do we want to rent some space to have a longer sprint? Or is it too late, people will already have booked their plane tickets, or ... 
Laura From anto.cuni at gmail.com Mon Apr 11 20:27:37 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Mon, 11 Apr 2011 20:27:37 +0200 Subject: [pypy-dev] Pre sprint beer session Sunday 24th of April In-Reply-To: <4DA2F860.6030606@changemaker.nu> References: <4DA2F860.6030606@changemaker.nu> Message-ID: <4DA34819.1080007@gmail.com> On 11/04/11 14:47, Bea During wrote: > I still would like to meet up so I invite you to a pre sprint beer > session at Bishops Arms pub in central Gothenburg, Sunday 24th of April, > starting from 18:00. [cut] > p.s. Anto - will miss you there d.s yeah, unfortunately I won't be able to make it :-( Have fun! ciao, Anto From anto.cuni at gmail.com Mon Apr 11 20:34:42 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Mon, 11 Apr 2011 20:34:42 +0200 Subject: [pypy-dev] The JVM backend and Jython In-Reply-To: References: Message-ID: <4DA349C2.3080403@gmail.com> Hi Frank, On 30/03/11 04:40, fwierzbicki at gmail.com wrote: [cut] > So to my question - just how broken is the JVM backend? Are there > workarounds that would allow the Java code to get generated? so, now the jvm (and cli) translation works again. You can just type ./translate.py -b jvm, and the fish the relevant .class/.j files from /tmp/usession-default-*/pypy. ciao, Anto From matthew at woodcraft.me.uk Mon Apr 11 20:52:28 2011 From: matthew at woodcraft.me.uk (Matthew Woodcraft) Date: Mon, 11 Apr 2011 19:52:28 +0100 Subject: [pypy-dev] An interesting article on chip design In-Reply-To: References: Message-ID: On 2011-04-11 01:01, Maciej Fijalkowski wrote: > For those interested in hardware/assembler. > > http://www.lighterra.com/papers/modernmicroprocessors/ For specifics of x86 processors, http://www.agner.org/optimize/ is a good source of information (particularly microarchitecture.pdf and optimizing_assembly.pdf). -M- From arigo at tunes.org Mon Apr 11 21:16:30 2011 From: arigo at tunes.org (Armin Rigo) Date: Mon, 11 Apr 2011 21:16:30 +0200 Subject: [pypy-dev] Waf benchmark In-Reply-To: References: Message-ID: Hi Maciej, On Mon, Apr 11, 2011 at 11:53 AM, Maciej Fijalkowski wrote: > I propose the waf benchmark removal. Given that its speed is at "1 for CPython, 1 for PyPy without JIT, 1 for PyPy with JIT", it seems rather pointless indeed, at least for us. Armin From arigo at tunes.org Mon Apr 11 21:19:22 2011 From: arigo at tunes.org (Armin Rigo) Date: Mon, 11 Apr 2011 21:19:22 +0200 Subject: [pypy-dev] translationmodules option failing In-Reply-To: <20110411114421.GC7395@pocketnix.org> References: <20110411114421.GC7395@pocketnix.org> Message-ID: Hi, On Mon, Apr 11, 2011 at 1:44 PM, wrote: > - ? ? "struct", "md5", "cStringIO", "array"])) > + ? ? "struct", "_md5", "cStringIO", "array"])) Thanks! Applied. Armin From arigo at tunes.org Mon Apr 11 21:27:09 2011 From: arigo at tunes.org (Armin Rigo) Date: Mon, 11 Apr 2011 21:27:09 +0200 Subject: [pypy-dev] File overwriting (--output flag to translate.py) In-Reply-To: <20110411124541.GE7395@pocketnix.org> References: <20110411124541.GE7395@pocketnix.org> Message-ID: Re-Hi, On Mon, Apr 11, 2011 at 2:45 PM, wrote: > + ? ? ? ?# Ensure the file does not exisit else we fail at end of translation > + ? ? ? ?if os.path.isfile(drv.exe_name): > + ? ? ? ? ? ?raise ValueError('File "' + drv.exe_name+ '" already exisits (--output)') Sadly everyone so far has his own additional hacks to categorize multiple translated versions. Mine is to ignore the pypy-c entirely and copy the executable from the /tmp, after it has been produced there. 
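(For illustration only, a minimal sketch of that kind of copy-from-/tmp hack - the usession-*/testing_1/pypy-c location is the default place the translated binary ends up, as also noted later in this thread; everything else here is made up for the example and is nobody's actual script:)

import glob
import os
import shutil
import tempfile
import time

def save_latest(tag):
    # pick the most recently touched usession directory left behind by
    # translate.py (assumes at least one translation has finished)
    sessions = glob.glob(os.path.join(tempfile.gettempdir(), 'usession-*'))
    newest = max(sessions, key=os.path.getmtime)
    src = os.path.join(newest, 'testing_1', 'pypy-c')
    # keep the binary under a name that records what kind of build it was
    dst = 'pypy-c-%s-%s' % (tag, time.strftime('%Y%m%d'))
    shutil.copy2(src, dst)
    return dst

# e.g. right after a "translate.py -O jit targetpypystandalone.py" run:
# save_latest('jit')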
I also copy the C sources (but not the other files produced by gcc). Anyway, my point is that the particular change you are proposing would actually harm me, because I always have a pypy-c and I'm fine if it gets overwritten by every translation :-) We need to think of some better solution... Armin From arigo at tunes.org Mon Apr 11 21:44:07 2011 From: arigo at tunes.org (Armin Rigo) Date: Mon, 11 Apr 2011 21:44:07 +0200 Subject: [pypy-dev] Pre sprint beer session Sunday 24th of April In-Reply-To: <4DA34819.1080007@gmail.com> References: <4DA2F860.6030606@changemaker.nu> <4DA34819.1080007@gmail.com> Message-ID: Hi Bea, On Mon, Apr 11, 2011 at 8:27 PM, Antonio Cuni wrote: >> session at Bishops Arms pub in central Gothenburg, Sunday 24th of April, Good, it will be nice to see you again Bea! Armin From arigo at tunes.org Mon Apr 11 21:46:25 2011 From: arigo at tunes.org (Armin Rigo) Date: Mon, 11 Apr 2011 21:46:25 +0200 Subject: [pypy-dev] thinking about the EuroPython sprint In-Reply-To: <201104111340.p3BDeB7t019802@theraft.openend.se> References: <201104111340.p3BDeB7t019802@theraft.openend.se> Message-ID: Hi Laura, On Mon, Apr 11, 2011 at 3:40 PM, Laura Creighton wrote: > 2 days after the conference is not a lot of time. ?Do we want to rent some > space to have a longer sprint? ?Or is it too late, people will already > have booked their plane tickets, or ... As far as I'm concerned, I think a sprint should be a bit longer to be useful. Wasn't there also talk about having the sprint (or *a* sprint) done before the conference? Armin From stefan_ml at behnel.de Mon Apr 11 23:02:32 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 11 Apr 2011 23:02:32 +0200 Subject: [pypy-dev] Waf benchmark In-Reply-To: References: Message-ID: Maciej Fijalkowski, 11.04.2011 11:53: > I propose the waf benchmark removal. > > Originally, the idea was that we're slower than CPython for no good > reason. Now that this benchmark measures some obscure piece of stdlib > time (subprocesses) I don't think it's that necessary. > > Besides: > > * the variation between runs is too big, so we don't care > * noone was ever remotely interested in speeding this up > > any opinions? Despite the relatively large variations, Cython runs this benchmark persistently ~1/3 faster than CPython 2.7 for me - minus the currently missing support for "__file__", which is used at build time here. So my vote would be to leave it in, maybe someone has an incentive to speed this up once you have bars up for Cython. :) Stefan From alex.gaynor at gmail.com Mon Apr 11 23:04:56 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 11 Apr 2011 17:04:56 -0400 Subject: [pypy-dev] Waf benchmark In-Reply-To: References: Message-ID: On Mon, Apr 11, 2011 at 5:02 PM, Stefan Behnel wrote: > Maciej Fijalkowski, 11.04.2011 11:53: > > I propose the waf benchmark removal. > > > > Originally, the idea was that we're slower than CPython for no good > > reason. Now that this benchmark measures some obscure piece of stdlib > > time (subprocesses) I don't think it's that necessary. > > > > Besides: > > > > * the variation between runs is too big, so we don't care > > * noone was ever remotely interested in speeding this up > > > > any opinions? > > Despite the relatively large variations, Cython runs this benchmark > persistently ~1/3 faster than CPython 2.7 for me - minus the currently > missing support for "__file__", which is used at build time here. 
So my > vote would be to leave it in, maybe someone has an incentive to speed this > up once you have bars up for Cython. :) > > Stefan > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > Personally I'd be happier if it was a bit more of a microbenchmark, it's apparently a macro-benchmark, of subprocess ATM, which makes no sense really :) Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From lac at openend.se Tue Apr 12 00:13:13 2011 From: lac at openend.se (Laura Creighton) Date: Tue, 12 Apr 2011 00:13:13 +0200 Subject: [pypy-dev] thinking about the EuroPython sprint In-Reply-To: Message from Armin Rigo of "Mon, 11 Apr 2011 21:46:25 +0200." References: <201104111340.p3BDeB7t019802@theraft.openend.se> Message-ID: <201104112213.p3BMDDEJ002041@theraft.openend.se> In a message of Mon, 11 Apr 2011 21:46:25 +0200, Armin Rigo writes: >Hi Laura, > >On Mon, Apr 11, 2011 at 3:40 PM, Laura Creighton wrote: >> 2 days after the conference is not a lot of time. =A0Do we want to rent > s= >ome >> space to have a longer sprint? =A0Or is it too late, people will alread >y >> have booked their plane tickets, or ... > >As far as I'm concerned, I think a sprint should be a bit longer to be >useful. Wasn't there also talk about having the sprint (or *a* >sprint) done before the conference? yes, I was hoping to catch Alex Gaynor for that, as he is attending EuroDjango earlier the same month. My point is that we need to decide what we are doing very, very soon. else everybody will have their plane tickets already, since we will have to announce what we are doing differently. Laura > > >Armin From arigo at tunes.org Tue Apr 12 08:52:13 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 12 Apr 2011 08:52:13 +0200 Subject: [pypy-dev] thinking about the EuroPython sprint In-Reply-To: <201104112213.p3BMDDEJ002041@theraft.openend.se> References: <201104111340.p3BDeB7t019802@theraft.openend.se> <201104112213.p3BMDDEJ002041@theraft.openend.se> Message-ID: Hi Laura, On Tue, Apr 12, 2011 at 12:13 AM, Laura Creighton wrote: > yes, I was hoping to catch Alex Gaynor for that, as he is attending > EuroDjango earlier the same month. ? My point is that we need to decide > what we are doing very, very soon. ?else everybody will have their > plane tickets already, since we will have to announce what we are > doing differently. Then what about a short week's sprint before the conference, plus the two days after? Armin From david at silveregg.co.jp Tue Apr 12 08:52:49 2011 From: david at silveregg.co.jp (David) Date: Tue, 12 Apr 2011 15:52:49 +0900 Subject: [pypy-dev] Waf benchmark In-Reply-To: References: Message-ID: <4DA3F6C1.8070601@silveregg.co.jp> On 04/12/2011 06:02 AM, Stefan Behnel wrote: > Maciej Fijalkowski, 11.04.2011 11:53: >> I propose the waf benchmark removal. >> >> Originally, the idea was that we're slower than CPython for no good >> reason. Now that this benchmark measures some obscure piece of stdlib >> time (subprocesses) I don't think it's that necessary. >> >> Besides: >> >> * the variation between runs is too big, so we don't care >> * noone was ever remotely interested in speeding this up >> >> any opinions? 
> > Despite the relatively large variations, Cython runs this benchmark > persistently ~1/3 faster than CPython 2.7 for me - minus the currently > missing support for "__file__", which is used at build time here. So my > vote would be to leave it in, maybe someone has an incentive to speed this > up once you have bars up for Cython. :) I could look at the waf script used for the benchmark so that it depends more on python performances. One thing which is annoying in all the python build tools (waf, scons, etc...) is that when you use multiple builds with python-based build functions, it is slower than with a single job. That's a bottleneck for the waf-based build of numpy, at least. cheers, David From anto.cuni at gmail.com Tue Apr 12 09:27:07 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Tue, 12 Apr 2011 09:27:07 +0200 Subject: [pypy-dev] thinking about the EuroPython sprint In-Reply-To: References: <201104111340.p3BDeB7t019802@theraft.openend.se> <201104112213.p3BMDDEJ002041@theraft.openend.se> Message-ID: <4DA3FECB.3080702@gmail.com> On 12/04/11 08:52, Armin Rigo wrote: > Then what about a short week's sprint before the conference, plus the > two days after? if people are interested, I might try to organize a sprint in Genova or surrounding (which is ~3h away from florence by train). However, I cannot assure that I'll be able to find a suitable place, because I see only two options: - ask the uni (but there are not really many rooms suitable for a sprint -- just one or two, and I think it'll be hard to reserve them for a whole week) - find a place "a la leysin", i.e. a hotel which gives also us a room for the sprint. However, middle of june is already vacation time here, so it is possible that hotels are full anyway and prefer to have "normal" visitors instead of nerds which stay there all the time :-) ciao, Anto From pypy at pocketnix.org Tue Apr 12 10:57:52 2011 From: pypy at pocketnix.org (pypy at pocketnix.org) Date: Tue, 12 Apr 2011 08:57:52 +0000 Subject: [pypy-dev] File overwriting (--output flag to translate.py) In-Reply-To: References: <20110411124541.GE7395@pocketnix.org> Message-ID: <20110412085751.GF7395@pocketnix.org> On Mon, Apr 11, 2011 at 09:27:09PM +0200, Armin Rigo wrote: > > Sadly everyone so far has his own additional hacks to categorize > multiple translated versions. Mine is to ignore the pypy-c entirely > and copy the executable from the /tmp, after it has been produced > there. I also copy the C sources (but not the other files produced by > gcc). Anyway, my point is that the particular change you are > proposing would actually harm me, because I always have a pypy-c and > I'm fine if it gets overwritten by every translation :-) > > We need to think of some better solution... 
> > Armin here is an updated version that specifically checks if the destination specified by the --output flag is the same as the running interpreter, this matches the error i was having more closely and should hopefully not interfere with other users work flows of course i am assuming here that you are not relying on the pdb shell as a notification to copy the file over, if you do feel free to ignore this updated patch - Da_Blitz ------------------------------------------------------------- Double check to ensure we are not overwriting the current interpreter --- a/pypy/translator/goal/translate.py +++ b/pypy/translator/goal/translate.py @@ -285,6 +285,10 @@ elif drv.exe_name is None and '__name__' in targetspec_dic: drv.exe_name = targetspec_dic['__name__'] + '-%(backend)s' + # Double check to ensure we are not overwriting the current interpreter + if os.path.realpath(drv.exe_name) == sys.executable: + raise ValueError('File "' + drv.exe_name+ '" already exisits (--output)') + goals = translateconfig.goals try: drv.proceed(goals) From cfbolz at gmx.de Tue Apr 12 11:36:08 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 12 Apr 2011 11:36:08 +0200 Subject: [pypy-dev] File overwriting (--output flag to translate.py) In-Reply-To: <20110411124541.GE7395@pocketnix.org> References: <20110411124541.GE7395@pocketnix.org> Message-ID: <4DA41D08.80803@gmx.de> On 04/11/2011 02:45 PM, pypy at pocketnix.org wrote: > Hi again > > i have been compiling a bunch of different pypy instances with different > levels of optimizations and features and found that if i run pypy-c > from the current directory and dont specify a new output filename it > will attempt to and fail to overwrite pypy-c due to the file being "in > use". unfortunately the exception generated is in the shutil lib from > mem and the error message/exception does not give away immediately what > the cause is which can lead to some frustration on some of the longer > compiles ;) Even if the copy fails, you can always fish the executable from the temp directory, under the name $TEMPDIR/usession-*/testing_1/pypy-c so you don't lose the previously translated executable. Carl Friedrich From pypy at pocketnix.org Tue Apr 12 12:34:50 2011 From: pypy at pocketnix.org (pypy at pocketnix.org) Date: Tue, 12 Apr 2011 10:34:50 +0000 Subject: [pypy-dev] translationmodules option failing In-Reply-To: References: <20110411114421.GC7395@pocketnix.org> Message-ID: <20110412103450.GG7395@pocketnix.org> On Mon, Apr 11, 2011 at 09:19:22PM +0200, Armin Rigo wrote: > Hi, > > On Mon, Apr 11, 2011 at 1:44 PM, wrote: > > - ? ? "struct", "md5", "cStringIO", "array"])) > > + ? ? "struct", "_md5", "cStringIO", "array"])) > > Thanks! Applied. > > Armin seems i missed the ctypes dependence on the threading module (i always passed --thread to translation.py) while the first pypy-c would compile correctly, attempting to translate a second pypy interpreter would fail with an exception related to ctypes with the patch below executing "./pypy-c ./translate.py -O jit ./targetpypystandalone.py --translationmodules" will work without passing any more arguments in case anyone is interested i included the last few lines of the traceback to the end of the email from the second run that fails - ? ? "struct", "_md5", "cStringIO", "array"])) + ? ? 
"struct", "_md5", "cStringIO", "array", "thread"])) ------------------------------ File "/home/dablitz/code/pypy/pypy/rpython/lltypesystem/ll2ctypes.py", line 247, in get_ctypes_type cls = build_new_ctypes_type(T, delayed_builders) File "/home/dablitz/code/pypy/pypy/rpython/lltypesystem/ll2ctypes.py", line 284, in build_new_ctypes_type _setup_ctypes_cache() File "/home/dablitz/code/pypy/pypy/rpython/lltypesystem/ll2ctypes.py", line 92, in _setup_ctypes_cache lltype.Signed: ctypes.c_long, AttributeError: 'NoneType' object has no attribute 'c_long' From arigo at tunes.org Tue Apr 12 12:40:11 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 12 Apr 2011 12:40:11 +0200 Subject: [pypy-dev] translationmodules option failing In-Reply-To: <20110412103450.GG7395@pocketnix.org> References: <20110411114421.GC7395@pocketnix.org> <20110412103450.GG7395@pocketnix.org> Message-ID: Re-hi, On Tue, Apr 12, 2011 at 12:34 PM, wrote: > - ? ? "struct", "_md5", "cStringIO", "array"])) > + ? ? "struct", "_md5", "cStringIO", "array", "thread"])) Uh, we can't import ctypes if we have no thread module? That looks obscure and should be fixed... Armin From arigo at tunes.org Tue Apr 12 21:07:36 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 12 Apr 2011 21:07:36 +0200 Subject: [pypy-dev] translationmodules option failing In-Reply-To: References: <20110411114421.GC7395@pocketnix.org> <20110412103450.GG7395@pocketnix.org> Message-ID: Hi, On Tue, Apr 12, 2011 at 12:40 PM, Armin Rigo wrote: > Uh, we can't import ctypes if we have no thread module? ?That looks > obscure and should be fixed... Fixed. Now a "--translationmodules" translation should, as originally planned, have no threads but ctypes should import on it anyway -- as well hopefully as all other modules needed by translate.py; feel free to report if some are still missing. (Note that the fix is purely app-level code, which means that you don't need to retranslate to see the benefit.) A bient?t, Armin. From anto.cuni at gmail.com Wed Apr 13 12:20:26 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 13 Apr 2011 12:20:26 +0200 Subject: [pypy-dev] Possible sprint in Genova before/after Europython Message-ID: <4DA578EA.2080108@gmail.com> Hi all, as we were discussing yesterday in another thread, the post-europython sprint will be only two days long, and so we might want to have a longer one either before or after europython. I am considering organizing one in my place. It could be either in Genova or more preferably in some other town in the nearby of the italian riviera. The place is ~3hr away from Florence by train. Googling images for "pegli" or "arenzano" (i.e., two of the towns we could go to) shows pictures which (I think) should encourage people to come :-) http://tinyurl.com/655p2at http://tinyurl.com/67q67r3 My idea is to find a hotel that gives us a sprinting room and internet in exchange of N people sleeping there, as we do in leysin. But before asking them, I need to have a rough idea about which value of "N" we are talking about (of course the higher the more interesting for them it is). Thus, I ask everybody who is potentially/likely interested in coming to let me know. Also, would you prefer to do it before or after europython? 
ciao, Anto From fijall at gmail.com Wed Apr 13 12:27:18 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 13 Apr 2011 12:27:18 +0200 Subject: [pypy-dev] Possible sprint in Genova before/after Europython In-Reply-To: <4DA578EA.2080108@gmail.com> References: <4DA578EA.2080108@gmail.com> Message-ID: On Wed, Apr 13, 2011 at 12:20 PM, Antonio Cuni wrote: > Hi all, > > as we were discussing yesterday in another thread, the post-europython sprint > will be only two days long, and so we might want to have a longer one either > before or after europython. > > I am considering organizing one in my place. ?It could be either in Genova or > more preferably in some other town in the nearby of the italian riviera. > The place is ~3hr away from Florence by train. > > Googling images for "pegli" or "arenzano" (i.e., two of the towns we could go > to) shows pictures which (I think) should encourage people to come :-) > http://tinyurl.com/655p2at > http://tinyurl.com/67q67r3 > > My idea is to find a hotel that gives us a sprinting room and internet in > exchange of N people sleeping there, as we do in leysin. > > But before asking them, I need to have a rough idea about which value of "N" > we are talking about (of course the higher the more interesting for them it is). > > Thus, I ask everybody who is potentially/likely interested in coming to let me > know. > > Also, would you prefer to do it before or after europython? > > ciao, > Anto > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > Not that I can't google myself, but those links are broken From anto.cuni at gmail.com Wed Apr 13 12:29:40 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 13 Apr 2011 12:29:40 +0200 Subject: [pypy-dev] Possible sprint in Genova before/after Europython In-Reply-To: References: <4DA578EA.2080108@gmail.com> Message-ID: <4DA57B14.6030806@gmail.com> On 13/04/11 12:27, Maciej Fijalkowski wrote: > Not that I can't google myself, but those links are broken uhm, indeed. These seems to work :-) http://bit.ly/e6vHkh http://bit.ly/fkHwMu From lac at openend.se Wed Apr 13 14:21:50 2011 From: lac at openend.se (Laura Creighton) Date: Wed, 13 Apr 2011 14:21:50 +0200 Subject: [pypy-dev] Possible sprint in Genova before/after Europython In-Reply-To: Message from Antonio Cuni of "Wed, 13 Apr 2011 12:20:26 +0200." <4DA578EA.2080108@gmail.com> References: <4DA578EA.2080108@gmail.com> Message-ID: <201104131221.p3DCLpg4021192@theraft.openend.se> In a message of Wed, 13 Apr 2011 12:20:26 +0200, Antonio Cuni writes: >Also, would you prefer to do it before or after europython? I am coming, and both times are fine for me. I am going to be spending the week after Europython in Italy somewhere anyway, prior to going kayaking in Sicily. Laura (who is speaking for Jacob here too) >ciao, >Anto From jacob at openend.se Wed Apr 13 14:53:26 2011 From: jacob at openend.se (Jacob =?iso-8859-1?q?Hall=E9n?=) Date: Wed, 13 Apr 2011 14:53:26 +0200 Subject: [pypy-dev] Possible sprint in Genova before/after Europython In-Reply-To: <201104131221.p3DCLpg4021192@theraft.openend.se> References: <4DA578EA.2080108@gmail.com> <201104131221.p3DCLpg4021192@theraft.openend.se> Message-ID: <201104131453.26813.jacob@openend.se> Wednesday 13 April 2011 you wrote: > In a message of Wed, 13 Apr 2011 12:20:26 +0200, Antonio Cuni writes: > >Also, would you prefer to do it before or after europython? > > I am coming, and both times are fine for me. 
I am going to be > spending the week after Europython in Italy somewhere anyway, prior > to going kayaking in Sicily. Corsica, not Sicily. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From anto.cuni at gmail.com Wed Apr 13 15:14:20 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Wed, 13 Apr 2011 15:14:20 +0200 Subject: [pypy-dev] Possible sprint in Genova before/after Europython In-Reply-To: <201104131453.26813.jacob@openend.se> References: <4DA578EA.2080108@gmail.com> <201104131221.p3DCLpg4021192@theraft.openend.se> <201104131453.26813.jacob@openend.se> Message-ID: <4DA5A1AC.6050305@gmail.com> On 13/04/11 14:53, Jacob Hall?n wrote: > Wednesday 13 April 2011 you wrote: >> In a message of Wed, 13 Apr 2011 12:20:26 +0200, Antonio Cuni writes: >>> Also, would you prefer to do it before or after europython? >> >> I am coming, and both times are fine for me. I am going to be >> spending the week after Europython in Italy somewhere anyway, prior >> to going kayaking in Sicily. > > Corsica, not Sicily. uhm, how do you go there? If by boat, it's very likely that you'll start by genova or savona (which is also close, ~50 km) From jacob at openend.se Wed Apr 13 16:42:04 2011 From: jacob at openend.se (Jacob =?iso-8859-1?q?Hall=E9n?=) Date: Wed, 13 Apr 2011 16:42:04 +0200 Subject: [pypy-dev] Possible sprint in Genova before/after Europython In-Reply-To: <4DA5A1AC.6050305@gmail.com> References: <4DA578EA.2080108@gmail.com> <201104131453.26813.jacob@openend.se> <4DA5A1AC.6050305@gmail.com> Message-ID: <201104131642.04792.jacob@openend.se> Wednesday 13 April 2011 you wrote: > On 13/04/11 14:53, Jacob Hall?n wrote: > > Wednesday 13 April 2011 you wrote: > >> In a message of Wed, 13 Apr 2011 12:20:26 +0200, Antonio Cuni writes: > >>> Also, would you prefer to do it before or after europython? > >> > >> I am coming, and both times are fine for me. I am going to be > >> spending the week after Europython in Italy somewhere anyway, prior > >> to going kayaking in Sicily. > > > > Corsica, not Sicily. > > uhm, how do you go there? If by boat, it's very likely that you'll start by > genova or savona (which is also close, ~50 km) We have looked at leaving from Livorno by boat, but there are other alternatives. Flying seems to out of the question, shortest way seems to be to go to Geneva to change planes. Jacob -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From lac at openend.se Wed Apr 13 19:32:03 2011 From: lac at openend.se (Laura Creighton) Date: Wed, 13 Apr 2011 19:32:03 +0200 Subject: [pypy-dev] This in from Jesse Noller Message-ID: <201104131732.p3DHW3X1017075@theraft.openend.se> ------- Forwarded Message Return-Path: jnoller at gmail.com Delivery-Date: Wed Apr 13 18:54:12 2011 Subject: Re: Mentor for Py3 benchmarking To: the-fellowship-of-the-packaging at googlegroups.com, Maciej Fijalkowski , Laura Creighton Cc: Arc Riley Content-Type: text/plain; charset=ISO-8859-1 Also, reach out to the PyPy team. They know more about speed.pypy.org than anyone else, and would be best suited to mentor. Python-Dev and PyPy-Dev are going to be the best avenues. 
On Tue, Apr 12, 2011 at 11:33 AM, Arc Riley wrote: > We have a number of students who proposed to port PyPy's benchmarking sui= te > to Python3 to run on speed.python.org, we don't have a mentor for these a= t > the moment. > > Would anyone here (pref previous GSoC mentor/student) like to take one of > these on? > > We have until Monday (4/18) to evaluate students, get patches/blogs/etc > taken care of, and all mentors assigned. If there are people here who want > to mentor talk to either Tarek (for packaging) or Martin v. L?wis (for > python-core).If you're an existing python-dev contributor we could > especially use your help. > - ------- End of Forwarded Message From hakan at debian.org Thu Apr 14 14:53:19 2011 From: hakan at debian.org (Hakan Ardo) Date: Thu, 14 Apr 2011 14:53:19 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: port test_intbound_addsub_ge to test_pypy_c_new In-Reply-To: <20110414124402.CD7472A204D@codespeak.net> References: <20110414124402.CD7472A204D@codespeak.net> Message-ID: On Thu, Apr 14, 2011 at 2:44 PM, antocuni wrote: > Author: Antonio Cuni > Branch: > Changeset: r43345:fd3f23ae8324 > Date: 2011-04-14 14:42 +0200 > http://bitbucket.org/pypy/pypy/changeset/fd3f23ae8324/ > > Log: ? ?port test_intbound_addsub_ge to test_pypy_c_new > > diff --git a/pypy/module/pypyjit/test_pypy_c/test_pypy_c_new.py b/pypy/module/pypyjit/test_pypy_c/test_pypy_c_new.py > --- a/pypy/module/pypyjit/test_pypy_c/test_pypy_c_new.py > +++ b/pypy/module/pypyjit/test_pypy_c/test_pypy_c_new.py > @@ -1194,3 +1194,33 @@ > ? ? ? ? ? ? --TICK-- > ? ? ? ? ? ? jump(p0, p1, p2, p3, i11, i13, descr=) > ? ? ? ? """) > + > + ? ?def test_intbound_addsub_ge(self): > + ? ? ? ?def main(n): > + ? ? ? ? ? ?i, a, b = 0, 0, 0 > + ? ? ? ? ? ?while i < n: > + ? ? ? ? ? ? ? ?if i + 5 >= 5: > + ? ? ? ? ? ? ? ? ? ?a += 1 > + ? ? ? ? ? ? ? ?if i - 1 >= -1: > + ? ? ? ? ? ? ? ? ? ?b += 1 > + ? ? ? ? ? ? ? ?i += 1 > + ? ? ? ? ? ?return (a, b) > + ? ? ? ?# > + ? ? ? ?log = self.run(main, [300], threshold=200) > + ? ? ? ?assert log.result == (300, 300) > + ? ? ? ?loop, = log.loops_by_filename(self.filepath) > + ? ? ? ?assert loop.match(""" > + ? ? ? ? ? ?i10 = int_lt(i8, i9) > + ? ? ? ? ? ?guard_true(i10, descr=...) > + ? ? ? ?# XXX: why do we need ovf check here? If we put a literal "300" > + ? ? ? ?# instead of "n", it disappears With n==sys.maxint, the operation i+5 would be the one overflowing. -- H?kan Ard? From anto.cuni at gmail.com Thu Apr 14 15:01:17 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 14 Apr 2011 15:01:17 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: port test_intbound_addsub_ge to test_pypy_c_new In-Reply-To: References: <20110414124402.CD7472A204D@codespeak.net> Message-ID: <4DA6F01D.8020505@gmail.com> Hi Hakan, On 14/04/11 14:53, Hakan Ardo wrote: >> + def test_intbound_addsub_ge(self): >> + def main(n): >> + i, a, b = 0, 0, 0 >> + while i < n: >> + if i + 5 >= 5: >> + a += 1 >> + if i - 1 >= -1: >> + b += 1 >> + i += 1 >> + return (a, b) >> + # >> + log = self.run(main, [300], threshold=200) >> + assert log.result == (300, 300) >> + loop, = log.loops_by_filename(self.filepath) >> + assert loop.match(""" >> + i10 = int_lt(i8, i9) >> + guard_true(i10, descr=...) >> + # XXX: why do we need ovf check here? If we put a literal "300" >> + # instead of "n", it disappears > > With n==sys.maxint, the operation i+5 would be the one overflowing. yes, you are right of course. I was just thinking nonsense, but I realized only after I asked the question :-). 
Of course the ovf check needs to be there because we don't specialize the loop on the value of n. Although it might be cool to be able to do promotion at applevel, for those who really want :-) ciao, Anto From hakan at debian.org Thu Apr 14 16:01:09 2011 From: hakan at debian.org (Hakan Ardo) Date: Thu, 14 Apr 2011 16:01:09 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: port test_intbound_addsub_ge to test_pypy_c_new In-Reply-To: <4DA6F01D.8020505@gmail.com> References: <20110414124402.CD7472A204D@codespeak.net> <4DA6F01D.8020505@gmail.com> Message-ID: On Thu, Apr 14, 2011 at 3:01 PM, Antonio Cuni wrote: > > Of course the ovf check needs to be there because we don't specialize the loop > on the value of n. Although it might be cool to be able to do promotion at > applevel, for those who really want :-) Well, you can actually (sort of): def main(n): i, a = 0, 0 exec """def promote(n): assert n==%d""" % n while i < n: promote(n) a += i+5 i += 1 return a With this I get two extra operations in the loop: i11 = ptr_eq(ConstPtr(ptr10), p7) guard_false(i11, descr=) [p1, p0, p2, p3, p4, i5, i6] but p7 is loop-invariant so they should be easy to get rid of. I don't know why they are not already... -- H?kan Ard? From hakan at debian.org Thu Apr 14 16:44:27 2011 From: hakan at debian.org (Hakan Ardo) Date: Thu, 14 Apr 2011 16:44:27 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: port test_intbound_addsub_ge to test_pypy_c_new In-Reply-To: References: <20110414124402.CD7472A204D@codespeak.net> <4DA6F01D.8020505@gmail.com> Message-ID: Second though, that will recompile on every call, but if we cache the promote functions: def main(n, promoters={}): i, a = 0, 0 if n not in promoters: exec """def promote(n): assert n==%d""" % n promoters[n] = promote else: promote = promoters[n] while i < n: promote(n) a += i+5 i += 1 return a we will actualy get rid of the extra ptr_eq and guard_false too... On Thu, Apr 14, 2011 at 4:01 PM, Hakan Ardo wrote: > On Thu, Apr 14, 2011 at 3:01 PM, Antonio Cuni wrote: >> >> Of course the ovf check needs to be there because we don't specialize the loop >> on the value of n. Although it might be cool to be able to do promotion at >> applevel, for those who really want :-) > > Well, you can actually (sort of): > > ? ?def main(n): > ? ? ? ?i, a = 0, 0 > ? ? ? ?exec """def promote(n): > ? ? ? ? ? ? ? ? ? ?assert n==%d""" % n > ? ? ? ?while i < n: > ? ? ? ? ? ?promote(n) > ? ? ? ? ? ?a += i+5 > ? ? ? ? ? ?i += 1 > ? ? ? ?return a > > With this I get two extra operations in the loop: > > ? ?i11 = ptr_eq(ConstPtr(ptr10), p7) > ? ?guard_false(i11, descr=) [p1, p0, p2, p3, p4, i5, i6] > > but p7 is loop-invariant so they should be easy to get rid of. I don't > know why they are not already... > > -- > H?kan Ard? > -- H?kan Ard? 
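[Illustrative sketch, not part of Hakan's patches: a self-contained version of the exec-and-cache "promote" trick described above. The helper name make_promoter is invented for this example, and whether the extra ptr_eq/guard_false really disappear depends on the PyPy version being used.]

def make_promoter(n, _cache={}):
    # Compile, once per distinct n, a tiny function whose code object has
    # the constant n baked in; the assert gives the tracing JIT a guard
    # that fixes the value of its argument inside the traced loop.
    if n not in _cache:
        d = {}
        exec ("def promote(x):\n"
              "    assert x == %d\n" % n) in d
        _cache[n] = d['promote']
    return _cache[n]

def main(n):
    promote = make_promoter(n)   # cached, so no recompile on later calls
    i = a = 0
    while i < n:
        promote(n)               # hint: n behaves like a constant below
        a += i + 5
        i += 1
    return a

if __name__ == '__main__':
    print main(300)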
From anto.cuni at gmail.com Thu Apr 14 16:56:03 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Thu, 14 Apr 2011 16:56:03 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: port test_intbound_addsub_ge to test_pypy_c_new In-Reply-To: References: <20110414124402.CD7472A204D@codespeak.net> <4DA6F01D.8020505@gmail.com> Message-ID: <4DA70B03.7090106@gmail.com> On 14/04/11 16:44, Hakan Ardo wrote: > Second though, that will recompile on every call, but if we cache the > promote functions: > > def main(n, promoters={}): > i, a = 0, 0 > if n not in promoters: > exec """def promote(n): > assert n==%d""" % n > promoters[n] = promote > else: > promote = promoters[n] > while i < n: > promote(n) > a += i+5 > i += 1 > return a > > we will actualy get rid of the extra ptr_eq and guard_false too... wow, that's advanced... and without knowing the internals of the JIT, it really looks like black magic. While we are at it and if you have time/feel like, could you please have a look to test_zeropadded and test_circular in test_pypy_c_new? It's not clear to me what they are supposed to check (note that it's fine if you say "they just check that the program works correctly" :-)). From alex.gaynor at gmail.com Thu Apr 14 17:15:13 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Thu, 14 Apr 2011 11:15:13 -0400 Subject: [pypy-dev] [pypy-svn] pypy default: port test_intbound_addsub_ge to test_pypy_c_new In-Reply-To: <4DA70B03.7090106@gmail.com> References: <20110414124402.CD7472A204D@codespeak.net> <4DA6F01D.8020505@gmail.com> <4DA70B03.7090106@gmail.com> Message-ID: On Thu, Apr 14, 2011 at 10:56 AM, Antonio Cuni wrote: > On 14/04/11 16:44, Hakan Ardo wrote: > > Second though, that will recompile on every call, but if we cache the > > promote functions: > > > > def main(n, promoters={}): > > i, a = 0, 0 > > if n not in promoters: > > exec """def promote(n): > > assert n==%d""" % n > > promoters[n] = promote > > else: > > promote = promoters[n] > > while i < n: > > promote(n) > > a += i+5 > > i += 1 > > return a > > > > we will actualy get rid of the extra ptr_eq and guard_false too... > > wow, that's advanced... and without knowing the internals of the JIT, it > really looks like black magic. > > While we are at it and if you have time/feel like, could you please have a > look to test_zeropadded and test_circular in test_pypy_c_new? It's not > clear > to me what they are supposed to check (note that it's fine if you say "they > just check that the program works correctly" :-)). > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > The idea of a builtin app level promote is cool, I guess it should be smart and unbox primitives though. Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Thu Apr 14 19:47:39 2011 From: holger at merlinux.eu (holger krekel) Date: Thu, 14 Apr 2011 17:47:39 +0000 Subject: [pypy-dev] project infrastructure issues Message-ID: <20110414174739.GX16231@merlinux.eu> Hey all, now that pypy's codespeak subversion usage is basically gone i'd like to push for remaining issues related to the pypy infrastructure: - apache/website - buildbot/master - roundup/issue tracker - mailman/mailing lists pypy-dev/commits/z Which of them shall we try to move elsewhere? 
My preliminary suggestion: - website -> readthedocs? or other site - buildbot -> python.org? or other site - issue tracker -> bitbucket issue tracker - mailing lists -> google groups or python.org or other site The "other site" could be a host that i anyway need to have for remaining codespeak and merlinux stuff and which thus is somewhat guaranteed to work mail- and otherwise. Other people could get admin access as well, of course. any suggestions or comments? best, holger From fwierzbicki at gmail.com Thu Apr 14 19:20:27 2011 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Thu, 14 Apr 2011 10:20:27 -0700 Subject: [pypy-dev] The JVM backend and Jython In-Reply-To: <4DA349C2.3080403@gmail.com> References: <4DA349C2.3080403@gmail.com> Message-ID: On Mon, Apr 11, 2011 at 11:34 AM, Antonio Cuni wrote: > Hi Frank, > > On 30/03/11 04:40, fwierzbicki at gmail.com wrote: > [cut] >> So to my question - just how broken is the JVM backend? Are there >> workarounds that would allow the Java code to get generated? > > so, now the jvm (and cli) translation works again. You can just type > ./translate.py -b jvm, and the fish the relevant .class/.j files from > /tmp/usession-default-*/pypy. Nice - thanks! -Frank From alex.gaynor at gmail.com Fri Apr 15 03:01:21 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Thu, 14 Apr 2011 21:01:21 -0400 Subject: [pypy-dev] project infrastructure issues In-Reply-To: <20110414174739.GX16231@merlinux.eu> References: <20110414174739.GX16231@merlinux.eu> Message-ID: On Thu, Apr 14, 2011 at 1:47 PM, holger krekel wrote: > Hey all, > > now that pypy's codespeak subversion usage is basically gone i'd like to > push for remaining issues related to the pypy infrastructure: > > - apache/website > - buildbot/master > - roundup/issue tracker > - mailman/mailing lists pypy-dev/commits/z > > Which of them shall we try to move elsewhere? > > My preliminary suggestion: > > - website -> readthedocs? or other site > - buildbot -> python.org? or other site > - issue tracker -> bitbucket issue tracker > - mailing lists -> google groups or python.org or other site > > The "other site" could be a host that i anyway > need to have for remaining codespeak and merlinux stuff > and which thus is somewhat guaranteed to work > mail- and otherwise. Other people could get admin > access as well, of course. > > any suggestions or comments? > > best, > holger > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > readthedocs seems like the right solution for docs, should just be a matter of setting up the post-commit hook and adding a cname for docs.pypy.org Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From anto.cuni at gmail.com Fri Apr 15 09:58:01 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 15 Apr 2011 09:58:01 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: fixed test_circular In-Reply-To: <20110414171414.3B30C2A2043@codespeak.net> References: <20110414171414.3B30C2A2043@codespeak.net> Message-ID: <4DA7FA89.2000908@gmail.com> Hi Hakan, thanks for the commits > + # We want to check that the array bound checks are removed, > + # so it's this part of the trace. However we dont care about > + # the force_token()'s. Can they be ignored? 
yes, I think they can be just ignored, because AFAIK operations without side effects and whose result is unused, are removed by the backend regalloc. ciao, Anto From anto.cuni at gmail.com Fri Apr 15 10:38:30 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 15 Apr 2011 10:38:30 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: fixed test_circular In-Reply-To: References: <20110414171414.3B30C2A2043@codespeak.net> <4DA7FA89.2000908@gmail.com> Message-ID: > Right. My point was that since we dont care if they are there or not > the test should not test that they are there and fail if they are not. > So if there is an easy way to ignore them in this new test_pypy_c > framework (which is very cool by the way), we should. If it's not easy > I'm fine with keeping the test as it is. My main motivation here is to > learn about the new framework :) ah, I understand now. No, ignoring all force_tokens at once is not possible at the moment, but I agree that it would be a nice feature, I think I'll implement it later. Btw, I fear I need more of your help with test_silly_max and test_iter_max (see 2e5bd737be0c): what do we want to check there? From fijall at gmail.com Fri Apr 15 10:45:08 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 15 Apr 2011 10:45:08 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: fixed test_circular In-Reply-To: References: <20110414171414.3B30C2A2043@codespeak.net> <4DA7FA89.2000908@gmail.com> Message-ID: On Fri, Apr 15, 2011 at 10:38 AM, Antonio Cuni wrote: >> Right. My point was that since we dont care if they are there or not >> the test should not test that they are there and fail if they are not. >> So if there is an easy way to ignore them in this new test_pypy_c >> framework (which is very cool by the way), we should. If it's not easy >> I'm fine with keeping the test as it is. My main motivation here is to >> learn about the new framework :) > > ah, I understand now. > No, ignoring all force_tokens at once is not possible at the moment, > but I agree that it would be a nice feature, I think I'll implement it > later. > > Btw, I fear I need more of your help with test_silly_max and > test_iter_max (see 2e5bd737be0c): what do we want to check there? > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > Note that force_token operation is really cheap in the backend. It also does not use a whole lot of space. From hakan at debian.org Fri Apr 15 10:50:25 2011 From: hakan at debian.org (Hakan Ardo) Date: Fri, 15 Apr 2011 10:50:25 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: fixed test_circular In-Reply-To: References: <20110414171414.3B30C2A2043@codespeak.net> <4DA7FA89.2000908@gmail.com> Message-ID: Hi, the point here is that we want max(a,b) to be turned into a single guard while we dont want max(*range(300)) and max(range(300)) to blow up into 300 guards, since that might lead to 2**300 different traces. I'm not sure how to best test this... On Fri, Apr 15, 2011 at 10:38 AM, Antonio Cuni wrote: >> Right. My point was that since we dont care if they are there or not >> the test should not test that they are there and fail if they are not. >> So if there is an easy way to ignore them in this new test_pypy_c >> framework (which is very cool by the way), we should. If it's not easy >> I'm fine with keeping the test as it is. My main motivation here is to >> learn about the new framework :) > > ah, I understand now. 
> No, ignoring all force_tokens at once is not possible at the moment, > but I agree that it would be a nice feature, I think I'll implement it > later. > > Btw, I fear I need more of your help with test_silly_max and > test_iter_max (see 2e5bd737be0c): what do we want to check there? > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > -- H?kan Ard? From anto.cuni at gmail.com Fri Apr 15 11:55:45 2011 From: anto.cuni at gmail.com (Antonio Cuni) Date: Fri, 15 Apr 2011 11:55:45 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: fixed test_circular In-Reply-To: References: <20110414171414.3B30C2A2043@codespeak.net> <4DA7FA89.2000908@gmail.com> Message-ID: <4DA81621.2020000@gmail.com> On 15/04/11 10:50, Hakan Ardo wrote: > Hi, > the point here is that we want max(a,b) to be turned into a single > guard while we dont want max(*range(300)) and max(range(300)) to blow > up into 300 guards, since that might lead to 2**300 different traces. > I'm not sure how to best test this... can't we just check that the loop contains a residual call to min_max_loop? From hakan at debian.org Fri Apr 15 13:11:53 2011 From: hakan at debian.org (Hakan Ardo) Date: Fri, 15 Apr 2011 13:11:53 +0200 Subject: [pypy-dev] [pypy-svn] pypy default: fixed test_circular In-Reply-To: <4DA81621.2020000@gmail.com> References: <20110414171414.3B30C2A2043@codespeak.net> <4DA7FA89.2000908@gmail.com> <4DA81621.2020000@gmail.com> Message-ID: OK, I also added a check on the guard count. On Fri, Apr 15, 2011 at 11:55 AM, Antonio Cuni wrote: > On 15/04/11 10:50, Hakan Ardo wrote: >> Hi, >> the point here is that we want max(a,b) to be turned into a single >> guard while we dont want max(*range(300)) and max(range(300)) to blow >> up into 300 guards, since that might lead to 2**300 different traces. >> I'm not sure how to best test this... > > can't we just check that the loop contains a residual call to min_max_loop? > -- H?kan Ard? From holger at merlinux.eu Fri Apr 15 21:42:02 2011 From: holger at merlinux.eu (holger krekel) Date: Fri, 15 Apr 2011 19:42:02 +0000 Subject: [pypy-dev] project infrastructure issues In-Reply-To: References: <20110414174739.GX16231@merlinux.eu> Message-ID: <20110415194202.GZ16231@merlinux.eu> On Thu, Apr 14, 2011 at 21:01 -0400, Alex Gaynor wrote: > On Thu, Apr 14, 2011 at 1:47 PM, holger krekel wrote: > > > Hey all, > > > > now that pypy's codespeak subversion usage is basically gone i'd like to > > push for remaining issues related to the pypy infrastructure: > > > > - apache/website > > - buildbot/master > > - roundup/issue tracker > > - mailman/mailing lists pypy-dev/commits/z > > > > Which of them shall we try to move elsewhere? > > > > My preliminary suggestion: > > > > - website -> readthedocs? or other site > > - buildbot -> python.org? or other site > > - issue tracker -> bitbucket issue tracker > > - mailing lists -> google groups or python.org or other site > > > > The "other site" could be a host that i anyway > > need to have for remaining codespeak and merlinux stuff > > and which thus is somewhat guaranteed to work > > mail- and otherwise. Other people could get admin > > access as well, of course. > > > > any suggestions or comments? 
> > > > best, > > holger > > _______________________________________________ > > pypy-dev at codespeak.net > > http://codespeak.net/mailman/listinfo/pypy-dev > > > > readthedocs seems like the right solution for docs, should just be a matter > of setting up the post-commit hook and adding a cname for docs.pypy.org That would still leave open the question of pypy.org itself i guess. besides, "make" in pypy/doc spews out a lot of errors and warnings for me. Do you know if anybody is caring for completing the transition to sphinx? holger From fijall at gmail.com Fri Apr 15 22:02:30 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 15 Apr 2011 22:02:30 +0200 Subject: [pypy-dev] project infrastructure issues In-Reply-To: <20110415194202.GZ16231@merlinux.eu> References: <20110414174739.GX16231@merlinux.eu> <20110415194202.GZ16231@merlinux.eu> Message-ID: On Fri, Apr 15, 2011 at 9:42 PM, holger krekel wrote: > On Thu, Apr 14, 2011 at 21:01 -0400, Alex Gaynor wrote: >> On Thu, Apr 14, 2011 at 1:47 PM, holger krekel wrote: >> >> > Hey all, >> > >> > now that pypy's codespeak subversion usage is basically gone i'd like to >> > push for remaining issues related to the pypy infrastructure: >> > >> > - apache/website >> > - buildbot/master >> > - roundup/issue tracker >> > - mailman/mailing lists pypy-dev/commits/z >> > >> > Which of them shall we try to move elsewhere? >> > >> > My preliminary suggestion: >> > >> > - website -> readthedocs? or other site >> > - buildbot -> python.org? or other site >> > - issue tracker -> bitbucket issue tracker >> > - mailing lists -> google groups or python.org or other site >> > >> > The "other site" could be a host that i anyway >> > need to have for remaining codespeak and merlinux stuff >> > and which thus is somewhat guaranteed to work >> > mail- and otherwise. ?Other people could get admin >> > access as well, of course. >> > >> > any suggestions or comments? >> > >> > best, >> > holger >> > _______________________________________________ >> > pypy-dev at codespeak.net >> > http://codespeak.net/mailman/listinfo/pypy-dev >> > >> >> readthedocs seems like the right solution for docs, should just be a matter >> of setting up the post-commit hook and adding a cname for docs.pypy.org > > That would still leave open the question of pypy.org itself i guess. > > besides, "make" in pypy/doc spews out a lot of errors and warnings for me. > Do you know if anybody is caring for completing the transition to sphinx? > > holger I think laura does. From matthew at woodcraft.me.uk Sun Apr 17 22:03:16 2011 From: matthew at woodcraft.me.uk (Matthew Woodcraft) Date: Sun, 17 Apr 2011 21:03:16 +0100 Subject: [pypy-dev] Ignore 'pinsrb/w/d' instructions in trackgcroot Message-ID: <20110417200316.GA22864@golux.woodcraft.me.uk> I found I needed the following patch in order to run translation with gcc 4.6 and -march=corei7. 
-M- --- a/pypy/translator/c/gcc/trackgcroot.py +++ b/pypy/translator/c/gcc/trackgcroot.py @@ -456,7 +456,7 @@ class FunctionGcRootTracker(object): 'inc', 'dec', 'not', 'neg', 'or', 'and', 'sbb', 'adc', 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', 'bswap', 'bt', 'rdtsc', - 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', + 'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', 'pinsr', # zero-extending moves should not produce GC pointers 'movz', ]) From fijall at gmail.com Mon Apr 18 08:07:39 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 18 Apr 2011 08:07:39 +0200 Subject: [pypy-dev] Ignore 'pinsrb/w/d' instructions in trackgcroot In-Reply-To: <20110417200316.GA22864@golux.woodcraft.me.uk> References: <20110417200316.GA22864@golux.woodcraft.me.uk> Message-ID: thanks, commited! On Sun, Apr 17, 2011 at 10:03 PM, Matthew Woodcraft wrote: > I found I needed the following patch in order to run translation with > gcc 4.6 and -march=corei7. > > -M- > > > --- a/pypy/translator/c/gcc/trackgcroot.py > +++ b/pypy/translator/c/gcc/trackgcroot.py > @@ -456,7 +456,7 @@ class FunctionGcRootTracker(object): > ? ? ? ? 'inc', 'dec', 'not', 'neg', 'or', 'and', 'sbb', 'adc', > ? ? ? ? 'shl', 'shr', 'sal', 'sar', 'rol', 'ror', 'mul', 'imul', 'div', 'idiv', > ? ? ? ? 'bswap', 'bt', 'rdtsc', > - ? ? ? ?'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', > + ? ? ? ?'punpck', 'pshufd', 'pcmp', 'pand', 'psllw', 'pslld', 'psllq', 'pinsr', > ? ? ? ? # zero-extending moves should not produce GC pointers > ? ? ? ? 'movz', > ? ? ? ? ]) > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From pypy at pocketnix.org Mon Apr 18 13:44:13 2011 From: pypy at pocketnix.org (pypy at pocketnix.org) Date: Mon, 18 Apr 2011 11:44:13 +0000 Subject: [pypy-dev] project infrastructure issues In-Reply-To: <20110414174739.GX16231@merlinux.eu> References: <20110414174739.GX16231@merlinux.eu> Message-ID: <20110418114413.GA17283@pocketnix.org> On Thu, Apr 14, 2011 at 05:47:39PM +0000, holger krekel wrote: > Hey all, > > now that pypy's codespeak subversion usage is basically gone i'd like to > push for remaining issues related to the pypy infrastructure: > > - apache/website > - buildbot/master > - roundup/issue tracker > - mailman/mailing lists pypy-dev/commits/z > > Which of them shall we try to move elsewhere? > > My preliminary suggestion: > > - website -> readthedocs? or other site > - buildbot -> python.org? or other site > - issue tracker -> bitbucket issue tracker > - mailing lists -> google groups or python.org or other site > > The "other site" could be a host that i anyway > need to have for remaining codespeak and merlinux stuff > and which thus is somewhat guaranteed to work > mail- and otherwise. Other people could get admin > access as well, of course. > Hi I dont mind hosting the website and mailing lists and would be more than willing to grab a cheap dedicated box or vps somewhere to do so. i already manage a number of boxes for my own private use and adding another wont be much more work i should be able to provide some build slaves without issue. these would be on a home dsl plan and may be down for a day or two per year. that said i can easily cache the output on a publicly available server. 
KVM or linux containers are avalible on the host and i was planning to run at least ubuntu and fedora as build slaves in a container with $GB of ram allocated to them (64bit) ssh access would be avalible. at the very least i am willing to put in some admin time for the pypy project. if there is alternate hardware avalible and you just need someone to man them or set them up let me know Questions: * with Apache is this a standard cgi setup or a wsgi or mod_python setup for any dynamic parts of the site * if there are any other oddities or special setups let me know - Da_Blitz From alex.gaynor at gmail.com Mon Apr 18 15:16:18 2011 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Mon, 18 Apr 2011 09:16:18 -0400 Subject: [pypy-dev] project infrastructure issues In-Reply-To: <20110418114413.GA17283@pocketnix.org> References: <20110414174739.GX16231@merlinux.eu> <20110418114413.GA17283@pocketnix.org> Message-ID: On Mon, Apr 18, 2011 at 7:44 AM, wrote: > On Thu, Apr 14, 2011 at 05:47:39PM +0000, holger krekel wrote: > > Hey all, > > > > now that pypy's codespeak subversion usage is basically gone i'd like to > > push for remaining issues related to the pypy infrastructure: > > > > - apache/website > > - buildbot/master > > - roundup/issue tracker > > - mailman/mailing lists pypy-dev/commits/z > > > > Which of them shall we try to move elsewhere? > > > > My preliminary suggestion: > > > > - website -> readthedocs? or other site > > - buildbot -> python.org? or other site > > - issue tracker -> bitbucket issue tracker > > - mailing lists -> google groups or python.org or other site > > > > The "other site" could be a host that i anyway > > need to have for remaining codespeak and merlinux stuff > > and which thus is somewhat guaranteed to work > > mail- and otherwise. Other people could get admin > > access as well, of course. > > > > Hi > > I dont mind hosting the website and mailing lists and would be more > than willing to grab a cheap dedicated box or vps somewhere to do so. > i already manage a number of boxes for my own private use and adding > another wont be much more work > > i should be able to provide some build slaves without issue. these > would be on a home dsl plan and may be down for a day or two per year. > that said i can easily cache the output on a publicly available > server. > > KVM or linux containers are avalible on the host and i was planning to > run at least ubuntu and fedora as build slaves in a container with $GB > of ram allocated to them (64bit) ssh access would be avalible. > > at the very least i am willing to put in some admin time for the pypy > project. if there is alternate hardware avalible and you just need > someone to man them or set them up let me know > > Questions: > * with Apache is this a standard cgi setup or a wsgi or mod_python setup > for any dynamic parts of the site > > * if there are any other oddities or special setups let me know > > - Da_Blitz > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > For hosting just the website, I'm sure someone like ep.io would be more than willing to help us out if we asked (hell we probably fall under their free limits). Alex -- "I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire) "The people's good is the highest law." -- Cicero -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From berdario at gmail.com Mon Apr 18 23:14:16 2011 From: berdario at gmail.com (Dario Bertini) Date: Mon, 18 Apr 2011 23:14:16 +0200 Subject: [pypy-dev] Possible sprint in Genova before/after Europython In-Reply-To: <4DA578EA.2080108@gmail.com> References: <4DA578EA.2080108@gmail.com> Message-ID: On 13 April 2011 12:20, Antonio Cuni wrote: > Thus, I ask everybody who is potentially/likely interested in coming to let me > know. > I plan to attend From qbproger at gmail.com Tue Apr 19 00:25:44 2011 From: qbproger at gmail.com (Joe) Date: Mon, 18 Apr 2011 18:25:44 -0400 Subject: [pypy-dev] Problem with large allocation in test Message-ID: I was trying to run the test file: pypy/jit/backend/x86/test/test_rx86_64_auto_encoding.py and was getting the following traceback: http://paste.pocoo.org/show/374129/ If you look at the comment on line 17, it's trying to allocate much more memory than I have. I think it's a total of 21GB, while I only have 4GB. I'm using 64bit OpenSuSE 11.4 for my operating system. I had the kernel setting overcommit_memory set to 0 (which may be part of the problem). Anyway, after I went into ll2ctypes.py and set far_regions to True, I was able to successfully run the original test. I don't think setting far_regions to True is the correct solution to the problem, but fiddling with kernel settings on my system is not ideal either. What would be a better overall solution? If any clarification is needed let me know, Joe From pypy at pocketnix.org Tue Apr 19 00:59:26 2011 From: pypy at pocketnix.org (pypy at pocketnix.org) Date: Mon, 18 Apr 2011 22:59:26 +0000 Subject: [pypy-dev] Problem with large allocation in test In-Reply-To: References: Message-ID: <20110418225926.GB17283@pocketnix.org> On Mon, Apr 18, 2011 at 06:25:44PM -0400, Joe wrote: > I was trying to run the test file: > pypy/jit/backend/x86/test/test_rx86_64_auto_encoding.py > > and was getting the following traceback: > http://paste.pocoo.org/show/374129/ > > If you look at the comment on line 17, it's trying to allocate much > more memory than I have. I think it's a total of 21GB, while I only > have 4GB. I'm using 64bit OpenSuSE 11.4 for my operating system. I > had the kernel setting overcommit_memory set to 0 (which may be part > of the problem). > > Anyway, after I went into ll2ctypes.py and set far_regions to True, I > was able to successfully run the original test. I don't think setting > far_regions to True is the correct solution to the problem, but > fiddling with kernel settings on my system is not ideal either. What > would be a better overall solution? 
> > If any clarification is needed let me know, > Joe your vm.overcommit_ratio should be set to "50" or 50% by default, as per http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob_plain;f=Documentation/vm/overcommit-accounting;hb=HEAD this means that any allocations of 6GB will automatically be rejected as "not sane" and you should receive the ENOMEM error indicating the kernel cannot satisfy the supplied range there are a couple of ways to fix this * don't allocate so much ram (did anyone test this before on a 64bit host on linux) * change the vm overcommit policy to 1 (allow everything, don't perform sanity checks) * change the overcommit ratio to something that will satisfy the allocation (20GB/4GB ~= 5x, so a value of 600% or 600 should do it) * Make the mapping a rmmap.MAP_PRIVATE and rmmap.PROT_READ only, depending on what you are testing this may not be useful as indicated in the linked kernel documentation YYMV, first option is safe, 2nd option you may want to double check the documentation and the values you are passing, the 3rd option is also safe in that it will allow badly behaved apps to run, not prevent apps from running to change either of these values use the following: * to adjust the allocation policy: sysctl vm.overcommit_memory= * to adjust the ratio: sysctl vm.overcommit_ratio= to print the current values (and save them for restoring them after you have done tweaking them: sysctl vm.overcommit_memory or sysctl vm.overcommit_ratio Hope this is whats causing the issue Da_Blitz From ian.overgard at gmail.com Tue Apr 19 03:15:57 2011 From: ian.overgard at gmail.com (Ian Overgard) Date: Mon, 18 Apr 2011 19:15:57 -0600 Subject: [pypy-dev] rlib parsing Message-ID: Hey guys, I was playing around with using the parser generator in rlib, but I'm having some trouble figuring out how to get it to work in rpython (I did get one working in normal python though). Is there a resource on using it somewhere, or maybe just a few pointers? (I realize it's probably in a pretty beta state, but so far it seems like the only parser-generator that's runnable in rpython without really big changes). I was reading this: http://codespeak.net/pypy/dist/pypy/doc/rlib.html#full-example but it seems to cut off rather abruptly. It seems like you do something along the lines of instantiating one in normal python, then asking it for its string representation and generating a source file from that. Is that accurate? Or did I just manage to confuse myself terribly while reading the prolog example? Thanks! Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From santagada at gmail.com Tue Apr 19 04:47:48 2011 From: santagada at gmail.com (Leonardo Santagada) Date: Mon, 18 Apr 2011 23:47:48 -0300 Subject: [pypy-dev] rlib parsing In-Reply-To: References: Message-ID: IIRC you have to parse your grammar definition before your entry point for example in the module level somewhere you do try: t = GFILE.read(mode='U') regexs, rules, ToAST = parse_ebnf(t) except ParseError,e: print e.nice_error_message(filename=str(GFILE),source=t) raise parsef = make_parse_function(regexs, rules, eof=True) then parsef can be used inside your entry point. What I mean is, parse_ebnf and make_parse_function are not RPython, so they need to run before translation take place (remember that the pypy translator runs after import time, and it translates rpython from live python objects in memory). 
On Mon, Apr 18, 2011 at 10:15 PM, Ian Overgard wrote: > Hey guys, > > I was playing around with using the parser generator in rlib, but I'm having > some trouble figuring out how to get it to work in rpython (I did get one > working in normal python though). Is there a resource on using it somewhere, > or maybe just a few pointers? (I realize it's probably in a pretty beta > state, but so far it seems like the only parser-generator that's runnable in > rpython without really big changes). I was reading this: > http://codespeak.net/pypy/dist/pypy/doc/rlib.html#full-example but it seems > to cut off rather abruptly. > > It seems like you do something along the lines of instantiating one in > normal python, then asking it for its string representation and generating a > source file from that. Is that accurate? Or did I just manage to confuse > myself terribly while reading the prolog example? > > Thanks! > Ian > > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > -- Leonardo Santagada From fijall at gmail.com Tue Apr 19 08:23:47 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 19 Apr 2011 08:23:47 +0200 Subject: [pypy-dev] Problem with large allocation in test In-Reply-To: <20110418225926.GB17283@pocketnix.org> References: <20110418225926.GB17283@pocketnix.org> Message-ID: On Tue, Apr 19, 2011 at 12:59 AM, wrote: > On Mon, Apr 18, 2011 at 06:25:44PM -0400, Joe wrote: >> I was trying to run the test file: >> pypy/jit/backend/x86/test/test_rx86_64_auto_encoding.py >> >> and was getting the following traceback: >> http://paste.pocoo.org/show/374129/ >> >> If you look at the comment on line 17, it's trying to allocate much >> more memory than I have. ?I think it's a total of 21GB, while I only >> have 4GB. ?I'm using 64bit OpenSuSE 11.4 for my operating system. I >> had the kernel setting overcommit_memory set to 0 (which may be part >> of the problem). >> >> Anyway, after I went into ll2ctypes.py and set far_regions to True, I >> was able to successfully run the original test. ?I don't think setting >> far_regions to True is the correct solution to the problem, but >> fiddling with kernel settings on my system is not ideal either. ?What >> would be a better overall solution? >> >> If any clarification is needed let me know, >> Joe > > > your vm.overcommit_ratio should be set to "50" or 50% by default, as > per > http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob_plain;f=Documentation/vm/overcommit-accounting;hb=HEAD > this means that any allocations of 6GB will automatically be rejected > as "not sane" and you should receive the ENOMEM error indicating the > kernel cannot satisfy the supplied range > > there are a couple of ways to fix this > > * don't allocate so much ram (did anyone test this before on a 64bit > host on linux) The reason for this is that we want to test far jumps (exceeding 4G or 2^32 in address space). How can you do it otherwise? 
> > * change the vm overcommit policy to 1 (allow everything, don't perform > sanity checks) > > * change the overcommit ratio to something that will satisfy the > allocation (20GB/4GB ~= 5x, so a value of 600% or 600 should do it) > > * Make the mapping a rmmap.MAP_PRIVATE and rmmap.PROT_READ only, > depending on what you are testing this may not be useful as indicated > in the linked kernel documentation > > YYMV, first option is safe, 2nd option you may want to double check > the documentation and the values you are passing, the 3rd option is > also safe in that it will allow badly behaved apps to run, not prevent > apps from running > > > to change either of these values use the following: > > * to adjust the allocation policy: > ?sysctl vm.overcommit_memory= > > * to adjust the ratio: > ?sysctl vm.overcommit_ratio= > > to print the current values (and save them for restoring them after > you have done tweaking them: > ?sysctl vm.overcommit_memory > or > ?sysctl vm.overcommit_ratio > > > Hope this is whats causing the issue > Da_Blitz > _______________________________________________ > pypy-dev at codespeak.net > http://codespeak.net/mailman/listinfo/pypy-dev > From arigo at tunes.org Tue Apr 19 10:10:40 2011 From: arigo at tunes.org (Armin Rigo) Date: Tue, 19 Apr 2011 10:10:40 +0200 Subject: [pypy-dev] Problem with large allocation in test In-Reply-To: References: <20110418225926.GB17283@pocketnix.org> Message-ID: Hi Joe, On Tue, Apr 19, 2011 at 8:23 AM, Maciej Fijalkowski wrote: >>> Anyway, after I went into ll2ctypes.py and set far_regions to True This is a correct workaround, indeed. As you found out the problem comes from the fact that your OS reserves actual RAM from mmap() eagerly. As Maciej points out, it's useful to test e.g. jumps over more than +/-2GB. But the same result could potentially be obtained in a different way: instead of allocating a single block of 20GB, we could mmap 10 blocks of (say) 20MB for the test, but taking care of placing the blocks so that they start 2GB apart from each other. This could be done using pypy.rlib.rmmap.alloc() or a variant. A bient?t, Armin. From vincent.legoll at gmail.com Tue Apr 19 14:02:43 2011 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Tue, 19 Apr 2011 14:02:43 +0200 Subject: [pypy-dev] Problem with large allocation in test In-Reply-To: References: <20110418225926.GB17283@pocketnix.org> Message-ID: Hello On Tue, Apr 19, 2011 at 10:10 AM, Armin Rigo wrote: > This is a correct workaround, indeed. ?As you found out the problem > comes from the fact that your OS reserves actual RAM from mmap() > eagerly. ?As Maciej points out, it's useful to test e.g. jumps over > more than +/-2GB. ?But the same result could potentially be obtained > in a different way: instead of allocating a single block of 20GB, we > could mmap 10 blocks of (say) 20MB for the test, but taking care of > placing the blocks so that they start 2GB apart from each other. ?This > could be done using pypy.rlib.rmmap.alloc() or a variant. Wouldn't a big read only mmap from /dev/zero before the to be jumped adress allow for not really trying to allocate the big chunk of RAM ? That may be a good fix, still allowing the test for far jumps... -- Vincent Legoll From ian.overgard at gmail.com Wed Apr 20 00:51:00 2011 From: ian.overgard at gmail.com (Ian Overgard) Date: Tue, 19 Apr 2011 16:51:00 -0600 Subject: [pypy-dev] rlib parsing In-Reply-To: References: Message-ID: Thanks, that definitely helped. I forgot you could run arbitrary python before the entry point. 
I've got it parsing now, but the one issue I'm still running into is that the syntax tree that comes back has a lot of junk nodes. The ToAST.transform function will clean them up, but it seems to be not-rpython, and I don't think there's any way I can call it before the entry point (since it's processing the ast, not the grammar). Is that class just hopeless? Or is there some way I can annotate it myself in the code? ( On Mon, Apr 18, 2011 at 8:47 PM, Leonardo Santagada wrote: > IIRC you have to parse your grammar definition before your entry point > > for example in the module level somewhere you do > > try: > t = GFILE.read(mode='U') > regexs, rules, ToAST = parse_ebnf(t) > except ParseError,e: > print e.nice_error_message(filename=str(GFILE),source=t) > raise > > parsef = make_parse_function(regexs, rules, eof=True) > > then parsef can be used inside your entry point. What I mean is, > parse_ebnf and make_parse_function are not RPython, so they need to > run before translation take place (remember that the pypy translator > runs after import time, and it translates rpython from live python > objects in memory). > > On Mon, Apr 18, 2011 at 10:15 PM, Ian Overgard > wrote: > > Hey guys, > > > > I was playing around with using the parser generator in rlib, but I'm > having > > some trouble figuring out how to get it to work in rpython (I did get one > > working in normal python though). Is there a resource on using it > somewhere, > > or maybe just a few pointers? (I realize it's probably in a pretty beta > > state, but so far it seems like the only parser-generator that's runnable > in > > rpython without really big changes). I was reading this: > > http://codespeak.net/pypy/dist/pypy/doc/rlib.html#full-example but it > seems > > to cut off rather abruptly. > > > > It seems like you do something along the lines of instantiating one in > > normal python, then asking it for its string representation and > generating a > > source file from that. Is that accurate? Or did I just manage to confuse > > myself terribly while reading the prolog example? > > > > Thanks! > > Ian > > > > _______________________________________________ > > pypy-dev at codespeak.net > > http://codespeak.net/mailman/listinfo/pypy-dev > > > > > > -- > Leonardo Santagada > -------------- next part -------------- An HTML attachment was scrubbed... URL: From santagada at gmail.com Wed Apr 20 07:25:34 2011 From: santagada at gmail.com (Leonardo Santagada) Date: Wed, 20 Apr 2011 02:25:34 -0300 Subject: [pypy-dev] rlib parsing In-Reply-To: References: Message-ID: Sorry I forgot to reply to the list. The first step would be to tell us the error you are getting. But the fastest way to solve your problems, and get help learning how to debug pypy is to join the pypy channel at freenode (IRC) On Tue, Apr 19, 2011 at 8:54 PM, Ian Overgard wrote: > That's what I thought too (my code is exactly like that), but something in > it is causing the translator to break, and the error I get back doesn't seem > to tell me anything I can work with. > > I wouldn't mind jumping in to help fix it, is there maybe some sort of > guide/general tips on how to go about debugging translation issues? So far > I've just been looking at the exceptions and the rpython docs and guessing > it out, but I'm guessing people that have worked with this longer might have > more robust ideas on ways to trace things? 
> > On Tue, Apr 19, 2011 at 5:24 PM, Leonardo Santagada > wrote: >> >> This code is rpython >> >> t = parsef(code) >> ToAST().transform(t) >> >> Maybe you can look at the tests in the library or the prolog and js >> implementation, both use parsing I think (at least js does). I really >> think that parsing could be polished to be a great tool for people >> wanting to implement a language in pypy. >> >> On Tue, Apr 19, 2011 at 7:51 PM, Ian Overgard >> wrote: >> > Thanks, that definitely helped. I forgot you could run arbitrary python >> > before the entry point. >> > >> > I've got it parsing now, but the one issue I'm still running into is >> > that >> > the syntax tree that comes back has a lot of junk nodes. The >> > ToAST.transform >> > function will clean them up, but it seems to be not-rpython, and I don't >> > think there's any way I can call it before the entry point (since it's >> > processing the ast, not the grammar). >> > >> > Is that class just hopeless? Or is there some way I can annotate it >> > myself >> > in the code? ( >> >> -- >> Leonardo Santagada > > -- Leonardo Santagada From cfbolz at gmx.de Wed Apr 20 11:33:02 2011 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Wed, 20 Apr 2011 11:33:02 +0200 Subject: [pypy-dev] rlib parsing In-Reply-To: References: Message-ID: <4DAEA84E.5020904@gmx.de> On 04/20/2011 12:51 AM, Ian Overgard wrote: > Thanks, that definitely helped. I forgot you could run arbitrary python > before the entry point. > > I've got it parsing now, but the one issue I'm still running into is > that the syntax tree that comes back has a lot of junk nodes. The > ToAST.transform function will clean them up, but it seems to be > not-rpython, and I don't think there's any way I can call it before the > entry point (since it's processing the ast, not the grammar). > > Is that class just hopeless? Or is there some way I can annotate it > myself in the code? ( You could take a look at the test test_translate_ast_visitor in rlib/parsing/test/test_translate.py. It does what you need to do by explicitly calling visit_[initial rule] on the AST visitor. I tried to replace that with a call to transform and it still worked. So I don't know what you are doing differently from that test. Carl Friedrich From arigo at tunes.org Thu Apr 21 12:21:48 2011 From: arigo at tunes.org (Armin Rigo) Date: Thu, 21 Apr 2011 12:21:48 +0200 Subject: [pypy-dev] Problem with large allocation in test In-Reply-To: References: <20110418225926.GB17283@pocketnix.org> Message-ID: Hi Vincent, On Tue, Apr 19, 2011 at 2:02 PM, Vincent Legoll wrote: > Wouldn't a big read only mmap from /dev/zero before the to be jumped > adress allow for not really trying to allocate the big chunk of RAM ? > > That may be a good fix, still allowing the test for far jumps... It's not needed; mmap() can optionally take a position argument, allowing us to choose the position of the small pieces of requested memory. A bient?t, Armin. From pypy at pocketnix.org Thu Apr 21 12:33:58 2011 From: pypy at pocketnix.org (pypy at pocketnix.org) Date: Thu, 21 Apr 2011 10:33:58 +0000 Subject: [pypy-dev] Problem with large allocation in test In-Reply-To: References: <20110418225926.GB17283@pocketnix.org> Message-ID: <20110421103358.GA25510@pocketnix.org> On Thu, Apr 21, 2011 at 12:21:48PM +0200, Armin Rigo wrote: > Hi Vincent, > > On Tue, Apr 19, 2011 at 2:02 PM, Vincent Legoll > wrote: > > Wouldn't a big read only mmap from /dev/zero before the to be jumped > > adress allow for not really trying to allocate the big chunk of RAM ? 
> >
> > That may be a good fix, still allowing the test for far jumps...
>
> It's not needed; mmap() can optionally take a position argument, allowing us to choose the position of the small pieces of requested memory.

I have been working on a patch based around this behavior of mmap. Currently mmap as called in this test does not allow you to specify the base_addr hint; however, there is an alloc function, as used by the jit, that allows you to specify this address indirectly (by setting hint.pos in the same class). It appears only the jit is calling this function, and as such I have been working on a patch that I hope to have ready in the next couple of days. I wouldn't mind some feedback on how I am trying to do it.

I am cloning the alloc function and adding a new parameter to specify a base addr, then ripping out the hint object, as I have been unable to find anything that references it directly (with a quick grep, will test to be sure). After that is done I will migrate the test from mmap to the new alloc function and provide hints that satisfy the spacing requirements.

Once this is all working I had intended to dump the old alloc function and replace it with the new alloc function and update the callers; from my grepping of the source it appears like it is called once or twice in the jit and should be easy to update.

I am assuming this is the correct course of action; if I should instead not be replacing the original alloc function and just create a new alloc_with_hint function and keep/use both, let me know.

This has been an interesting way to get familiar with pypy.

From arigo at tunes.org Thu Apr 21 13:28:31 2011
From: arigo at tunes.org (Armin Rigo)
Date: Thu, 21 Apr 2011 13:28:31 +0200
Subject: [pypy-dev] Problem with large allocation in test
In-Reply-To: <20110421103358.GA25510@pocketnix.org>
References: <20110418225926.GB17283@pocketnix.org> <20110421103358.GA25510@pocketnix.org>
Message-ID:

Hi,

On Thu, Apr 21, 2011 at 12:33 PM, wrote:
> I am assuming this is the correct course of action; if I should instead not be replacing the original alloc function and just create a new alloc_with_hint function and keep/use both, let me know.

Indeed, you should keep the original function too: the hint parameter is not used anywhere else in the source code, but it's convenient for debugging because it means that the JIT-generated code will be allocated at a known address, instead of randomly. That's this function's purpose.

I'm fine if you create another function. Or, if you prefer, you can change the existing function to take an optional base_addr argument. This argument would default to NULL, and if it is NULL then the function would use the hint as it does now.

A bientôt,
Armin.

From ian.overgard at gmail.com Thu Apr 21 17:31:42 2011
From: ian.overgard at gmail.com (Ian Overgard)
Date: Thu, 21 Apr 2011 09:31:42 -0600
Subject: [pypy-dev] rlib parsing
In-Reply-To: <4DAEA84E.5020904@gmx.de>
References: <4DAEA84E.5020904@gmx.de>
Message-ID:

Doh, it turns out you're right: it is valid RPython, the problem was elsewhere. I was assuming the transform function was at fault because the error only happened when that was included in the build.
It turns out correlation doesn't equal causation :-)

The actual bug was this:

    [translation:ERROR] Exception': found an operation that always raises AttributeError: generated by a constant operation: getattr(, 'config')

I managed to fix it by putting this lovely hack at the top of my script:

    class FixConfig:
        class option:
            view = False
    py.test.config = FixConfig

Everything seems to be working now. Thanks to both Carl and Leonardo for the help!

On Wed, Apr 20, 2011 at 3:33 AM, Carl Friedrich Bolz wrote:
> On 04/20/2011 12:51 AM, Ian Overgard wrote:
> > Thanks, that definitely helped. I forgot you could run arbitrary python before the entry point.
> >
> > I've got it parsing now, but the one issue I'm still running into is that the syntax tree that comes back has a lot of junk nodes. The ToAST.transform function will clean them up, but it seems not to be RPython, and I don't think there's any way I can call it before the entry point (since it's processing the AST, not the grammar).
> >
> > Is that class just hopeless? Or is there some way I can annotate it myself in the code?
>
> You could take a look at the test test_translate_ast_visitor in rlib/parsing/test/test_translate.py. It does what you need to do by explicitly calling visit_[initial rule] on the AST visitor. I tried to replace that with a call to transform and it still worked. So I don't know what you are doing differently from that test.
>
> Carl Friedrich
> _______________________________________________
> pypy-dev at codespeak.net
> http://codespeak.net/mailman/listinfo/pypy-dev

From exarkun at twistedmatrix.com Sat Apr 23 01:14:22 2011
From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com)
Date: Fri, 22 Apr 2011 23:14:22 -0000
Subject: [pypy-dev] improved cpyext PyArg_ParseTuple s* and t# support
Message-ID: <20110422231422.1992.1252208249.divmod.xquotient.920@localhost.localdomain>

Hello,

There's a pyarg-parsebuffer-new branch in the pypy bitbucket repository that improves PyArg_ParseTuple. It doesn't completely implement either of these (s* and t#), but it adds enough for pyOpenSSL to mostly work. Arbitrary new-style buffer support was too hard for me, so I skipped that.

Still, it would be nice to have this branch in 1.5 if possible, since I think with it along with my pyOpenSSL branch, pyOpenSSL becomes mostly functional on PyPy.

Can someone take a look and let me know if it's okay to merge to default?

Thanks,
Jean-Paul

From bhartsho at yahoo.com Mon Apr 25 09:45:24 2011
From: bhartsho at yahoo.com (Hart's Antler)
Date: Mon, 25 Apr 2011 00:45:24 -0700 (PDT)
Subject: [pypy-dev] stackless over ctypes - threading
Message-ID: <177297.9081.qm@web114019.mail.gq1.yahoo.com>

Hi PyPy Devs,

I have an example of using a compiled RPython extension module in Python using stackless (rcoroutines). From ctypes I can safely read the values of a shared object (shared between CPython and RPython), but I cannot in a thread-safe way set values on the shared object so that RPython can see them. I was wondering what is the best way to handle the thread locking.

http://pyppet.blogspot.com/2011/04/stackless-compiled-modules.html

thanks,
-brett

From arigo at tunes.org Mon Apr 25 11:07:55 2011
From: arigo at tunes.org (Armin Rigo)
Date: Mon, 25 Apr 2011 11:07:55 +0200
Subject: [pypy-dev] Release 1.5 preparation
Message-ID:

Hi all,

We are starting to prepare the release PyPy 1.5.
Note that we are doing it in the "default" branch of mercurial, because it's more convenient to run tests (notably codespeed's). So please, *any* development that you don't explicitly intend to be included in the 1.5 release must go to the branch "post-release-1.5", which we will merge back with "default" after the release is done.

A bientôt,
Armin.

From cfbolz at gmx.de Mon Apr 25 14:31:19 2011
From: cfbolz at gmx.de (Carl Friedrich Bolz)
Date: Mon, 25 Apr 2011 14:31:19 +0200
Subject: [pypy-dev] state of std library
Message-ID:

Hi all,

does anybody have an idea what the state of PyPy's standard library is? I assume it's at the state of CPython 2.7, right?

Then there is the branch merge-stdlib, which seems to merge in the 2.7.1 changes. Why are all the modified directories deleted there?

Carl Friedrich

From alex.gaynor at gmail.com Mon Apr 25 15:18:12 2011
From: alex.gaynor at gmail.com (Alex Gaynor)
Date: Mon, 25 Apr 2011 09:18:12 -0400
Subject: [pypy-dev] state of std library
In-Reply-To:
References:
Message-ID:

On Mon, Apr 25, 2011 at 8:31 AM, Carl Friedrich Bolz wrote:
> Hi all,
>
> does anybody have an idea what the state of PyPy's standard library is? I assume it's at the state of CPython 2.7, right?
>
> Then there is the branch merge-stdlib, which seems to merge in the 2.7.1 changes. Why are all the modified directories deleted there?
>
> Carl Friedrich
> _______________________________________________
> pypy-dev at codespeak.net
> http://codespeak.net/mailman/listinfo/pypy-dev

merge-stdlib was a branch Carl Meyer worked on at PyCon. The idea was that we keep a single stdlib dir, modify it as necessary in that dir, and just use hg to merge up with newer versions of CPython, rather than keeping a -modified dir around. "merge" therefore refers to merging the -modified and non-modified directories, not merging with upstream.

Alex

--
"I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero

From dimaqq at gmail.com Mon Apr 25 21:53:36 2011
From: dimaqq at gmail.com (Dima Tisnek)
Date: Mon, 25 Apr 2011 12:53:36 -0700
Subject: [pypy-dev] cpyext reference counting and other gc's
In-Reply-To:
References:
Message-ID:

Apologies that it took a little long. Here is the doc describing the idea and its side effects:

https://docs.google.com/document/d/1k7t-WIsfKW4tIL9i8-7Y6_9lo18wcsibyDONOF2i_l8/edit?hl=en

Since many of you are now busy with the release, I'm only asking for quick feedback, especially if I missed something obvious.

Thanks,
Dima Tisnek

On 26 March 2011 10:03, Dima Tisnek wrote:
> I have an alternative idea in mind.
>
> I'll write up a doc, stick it on github and share with you guys in a couple of days.
>
> Thanks for a clear answer, I just couldn't figure that out from the code easily :P
>
> d.
>
> On 26 March 2011 01:33, Amaury Forgeot d'Arc wrote:
>> Hi,
>>
>> 2011/3/26 Dima Tisnek :
>>> Hey, I had a look at cpyext recently and saw that reference counting is emulated with, err, reference counting, seemingly in the referenced object itself.
>>>
>>> Does this mean that cpyext would not work with other gc's or is there some wrapping going on behind the scenes?
>>
>> Cpyext works with all pypy gc's. The PyObject* exposed to C code is actually a proxy to the "real" interpreter object; a dict lookup is necessary each time a reference crosses the C/pypy boundary. Yes, this is slow.
>>
>> This is implemented in pypy/module/cpyext/pyobject.py; the main functions are create_ref() and from_ref().
>>
>> --
>> Amaury Forgeot d'Arc

From arigo at tunes.org Tue Apr 26 18:23:30 2011
From: arigo at tunes.org (Armin Rigo)
Date: Tue, 26 Apr 2011 18:23:30 +0200
Subject: [pypy-dev] cpyext reference counting and other gc's
In-Reply-To:
References:
Message-ID:

Hi Dima,

On Mon, Apr 25, 2011 at 9:53 PM, Dima Tisnek wrote:
> https://docs.google.com/document/d/1k7t-WIsfKW4tIL9i8-7Y6_9lo18wcsibyDONOF2i_l8/edit?hl=en

Can you explain a bit more what the advantages of the solution you propose are, compared to what is already implemented in cpyext? Your description is far too high-level for us to know exactly what you mean. It seems that you want to replace the currently implemented solution with a very different one. I can explain it in a bit more detail, but I would first like to hear what goal you are trying to achieve.

Here is a quick reply based on guessing. The issue with your version is that Py_INCREF() and Py_DECREF() need to do a slow dictionary lookup, while ours don't. Conversely, I believe that your version doesn't need a dictionary lookup in other cases where ours needs to. However, it seems to me that if you add so much overhead to Py_INCREF() and Py_DECREF(), you lose all the other speed advantages.

A bientôt,
Armin.

From qbproger at gmail.com Tue Apr 26 23:51:06 2011
From: qbproger at gmail.com (Joe)
Date: Tue, 26 Apr 2011 17:51:06 -0400
Subject: [pypy-dev] PyPy Assembler SQRT Patch
Message-ID:

Attached is a patch to allow pypy to use SQRTSD rather than calling out to libc. This resulted in a 2x speedup that scaled as the benchmark took longer: when I added another 0 to the end of the benchmark it was still a 2x speedup.

Benchmark results: http://paste.pocoo.org/show/378122/

I'll be happy to answer any questions about the patch.

Joe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: sqrtsd.patch
Type: application/octet-stream
Size: 14448 bytes
Desc: not available

From bhartsho at yahoo.com Thu Apr 28 02:27:37 2011
From: bhartsho at yahoo.com (Hart's Antler)
Date: Wed, 27 Apr 2011 17:27:37 -0700 (PDT)
Subject: [pypy-dev] Hybrid gc and stackless not threadsafe
Message-ID: <530338.23195.qm@web114020.mail.gq1.yahoo.com>

I am using a stackless compiled rpython module in python over ctypes. To make things threadsafe I'm using named semaphores. The problem is that stackless won't work with the refcounting gc that is threadsafe.

Sent from Yahoo! Mail on Android

From arigo at tunes.org Thu Apr 28 15:34:15 2011
From: arigo at tunes.org (Armin Rigo)
Date: Thu, 28 Apr 2011 15:34:15 +0200
Subject: [pypy-dev] stackless over ctypes - threading
In-Reply-To: <177297.9081.qm@web114019.mail.gq1.yahoo.com>
References: <177297.9081.qm@web114019.mail.gq1.yahoo.com>
Message-ID:

Hi Brett,

On Mon, Apr 25, 2011 at 9:45 AM, Hart's Antler wrote:
> I have an example of using a compiled RPython extension module in Python using stackless (rcoroutines). From ctypes I can safely read the values of a shared object (shared between CPython and RPython), but I cannot in a thread-safe way set values on the shared object so that RPython can see them. I was wondering what is the best way to handle the thread locking.

Sorry, no clue.
We don't know exactly what you are doing, but I can tell you that poking around with CPython's ctypes in the GC objects of a translated PyPy will just get you randomness. It might work a bit by chance with today's version of PyPy (e.g. as long as there are no multithreading issues, as long as you are on 32 bits and not 64 bits or vice versa, as long as you use only the default GC options, etc.). It's not something that we want to support.

A bientôt,
Armin.

From dasdasich at googlemail.com Thu Apr 28 20:55:19 2011
From: dasdasich at googlemail.com (DasIch)
Date: Thu, 28 Apr 2011 20:55:19 +0200
Subject: [pypy-dev] Proposal for a common benchmark suite
Message-ID:

Hello,

As announced in my GSoC proposal, I'd like to say which benchmarks I'll use for the benchmark suite I will work on this summer.

As of now there are two benchmark suites (that I know of) which receive some sort of attention: the one developed as part of the PyPy project[1], which is used for http://speed.pypy.org, and the one initially developed for Unladen Swallow, which has been continued by CPython[2]. The PyPy benchmarks contain a lot of interesting benchmarks, some explicitly developed for that suite; the CPython benchmarks have an extensive set of microbenchmarks in the pybench package as well as the previously mentioned modifications made to the Unladen Swallow benchmarks.

I'd like to "simply" merge both suites so that no changes are lost. However, I'd like to leave out the waf benchmark which is part of the PyPy suite; its removal was proposed on pypy-dev because of obvious deficits[3]. It will be easier to add a better benchmark later than to replace it at a later point.

Unless there is a major issue with this plan I'd like to go forward with it.

.. [1]: https://bitbucket.org/pypy/benchmarks
.. [2]: http://hg.python.org/benchmarks
.. [3]: http://mailrepository.com/pypy-dev.codespeak.net/msg/3627509/

From arigo at tunes.org Sat Apr 30 17:04:41 2011
From: arigo at tunes.org (Armin Rigo)
Date: Sat, 30 Apr 2011 17:04:41 +0200
Subject: [pypy-dev] PyPy 1.5 released
Message-ID:

======================
PyPy 1.5: Catching Up
======================

We're pleased to announce the 1.5 release of PyPy. This release updates PyPy with the features of CPython 2.7.1, including the standard library. Thus all the features of `CPython 2.6`_ and `CPython 2.7`_ are now supported. It also contains additional performance improvements. You can download it here:

    http://pypy.org/download.html

What is PyPy?
=============

PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7.1. It's fast (`pypy 1.5 and cpython 2.6.2`_ performance comparison) due to its integrated tracing JIT compiler.

This release includes the features of CPython 2.6 and 2.7. It also includes a large number of small improvements to the tracing JIT compiler. It supports Intel machines running Linux 32/64 or Mac OS X. Windows is beta (it roughly works but a lot of small issues have not been fixed so far). Windows 64 is not yet supported.

Numerous speed achievements are described on `our blog`_. Normalized speed charts comparing `pypy 1.5 and pypy 1.4`_ as well as `pypy 1.5 and cpython 2.6.2`_ are available on our benchmark website. The speed improvement over 1.4 seems to be around 25% on average.

More highlights
===============

- The largest change in PyPy's tracing JIT is adding support for `loop invariant code motion`_, which was mostly done by Håkan Ardö. This feature improves the performance of tight loops doing numerical calculations.

- The CPython extension module API has been improved and now supports many more extensions. For information on which ones are supported, please refer to our `compatibility wiki`_.

- These changes make it possible to support `Tkinter and IDLE`_.

- The `cProfile`_ profiler is now working with the JIT. However, it skews the performance in unstudied ways. Therefore it is not yet usable to analyze subtle performance problems (the same is true for CPython of course).

- There is an `external fork`_ which includes an RPython version of the ``postgresql`` module. However, there are no prebuilt binaries for this.

- Our developer documentation was moved to Sphinx and cleaned up. (Click 'Dev Site' on http://pypy.org/ .)

- and many small things :-)

Cheers,

Carl Friedrich Bolz, Laura Creighton, Antonio Cuni, Maciej Fijalkowski, Amaury Forgeot d'Arc, Alex Gaynor, Armin Rigo and the PyPy team

.. _`CPython 2.6`: http://docs.python.org/dev/whatsnew/2.6.html
.. _`CPython 2.7`: http://docs.python.org/dev/whatsnew/2.7.html
.. _`our blog`: http://morepypy.blogspot.com
.. _`pypy 1.5 and pypy 1.4`: http://bit.ly/joPhHo
.. _`pypy 1.5 and cpython 2.6.2`: http://bit.ly/mbVWwJ
.. _`loop invariant code motion`: http://morepypy.blogspot.com/2011/01/loop-invariant-code-motion.html
.. _`compatibility wiki`: https://bitbucket.org/pypy/compatibility/wiki/Home
.. _`Tkinter and IDLE`: http://morepypy.blogspot.com/2011/04/using-tkinter-and-idle-with-pypy.html
.. _`cProfile`: http://docs.python.org/library/profile.html
.. _`external fork`: https://bitbucket.org/alex_gaynor/pypy-postgresql

From massimo.sala.71 at gmail.com Sat Apr 30 17:38:19 2011
From: massimo.sala.71 at gmail.com (Massimo Sala)
Date: Sat, 30 Apr 2011 17:38:19 +0200
Subject: [pypy-dev] PyPy and memory
Message-ID:

What about PyPy vs plain CPython? Please see: http://pushingtheweb.com/2010/06/python-and-tcmalloc/

Is it possible for the maintainers to provide
- PyPy
- PyPy with tcmalloc
so end-users can
- give it a try
- do some tests in their different configs
- provide feedback here on the mailing list?

ciao,
Massimo

From arigo at tunes.org Sat Apr 30 17:54:31 2011
From: arigo at tunes.org (Armin Rigo)
Date: Sat, 30 Apr 2011 17:54:31 +0200
Subject: [pypy-dev] PyPy and memory
In-Reply-To:
References:
Message-ID:

Hi Massimo,

On Sat, Apr 30, 2011 at 5:38 PM, Massimo Sala wrote:
> Is it possible for the maintainers to provide
> - PyPy
> - PyPy with tcmalloc

Feel free to try. You need to get pypy, translate it (cd pypy/translator/goal; ./translate.py -Ojit), and you get the C sources in /tmp/usession-yourname/testing_1. Then you can try to add a #define to rename all malloc() to tcmalloc(), or however it is called.

PyPy does not use malloc() to allocate its own objects, but it still uses malloc() to get arenas, so from this point of view it is similar to CPython.

A bientôt,
Armin.
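
For anyone who wants to try what Armin describes without editing the generated C sources by hand, the usual way to swap in tcmalloc is at link or load time rather than with a #define. The sketch below is untested and rests on two assumptions: that tcmalloc (from google-perftools) is installed at the paths shown, and that the Makefile produced by the translation honors linker flags passed on the make command line; if either assumption does not hold, adjust the paths or edit the link command manually.

    # after translating as described above:
    cd /tmp/usession-yourname/testing_1

    # Option 1 (assumes the generated Makefile uses $(LDFLAGS) in its link rule):
    # relink the interpreter against tcmalloc, which overrides malloc()/free()
    make clean
    make LDFLAGS="-ltcmalloc"

    # Option 2: no rebuild at all -- preload tcmalloc so its malloc()/free()
    # replace the libc versions for this one process
    # (adjust the library path and the binary name to your system)
    LD_PRELOAD=/usr/lib/libtcmalloc.so ./pypy-c your_script.py

Either way, the arena allocations that PyPy makes through malloc() should then go through tcmalloc, which is what the blog post linked above measures for CPython.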