From brett at python.org Sat Sep 1 19:21:36 2012 From: brett at python.org (Brett Cannon) Date: Sat, 1 Sep 2012 13:21:36 -0400 Subject: [Speed] Are benchmarks and libraries mutable? Message-ID: Now that I can run benchmarks against Python 2.7 and 3.3 simultaneously, I'm ready to start updating the benchmarks. This involves two parts. One is moving benchmarks from PyPy over to the unladen repo on hg.python.org/benchmarks. But I wanted to first make sure people don't view the benchmarks as immutable (e.g. as Octane does: https://developers.google.com/octane/faq). Since the benchmarks are always relative between two interpreters, their immutability isn't as critical as it would be if we were to report some overall score. But it also means that any changes made would throw off historical comparisons. For instance, if I take PyPy's Mako benchmark (which does a lot more work), should it be named mako_v2, or should we just replace mako wholesale? And the second is the same question for libraries. For instance, the unladen benchmarks have Django 1.1a0 as the version, which is rather ancient. And with 1.5 coming out with provisional Python 3 support I obviously would like to update it. But the same questions as with benchmarks crop up in reference to immutability. Another thing is that 2to3 can't actually be ported using 2to3 (http://bugs.python.org/issue15834) and so that itself will require two versions -- a 2.x version (probably from Python 2.7's stdlib) and a 3.x version (from the 3.2 stdlib) -- which already starts to add interesting issues for me in terms of comparing performance (e.g. I will probably have to update the 2.7 code to use io.BytesIO instead of StringIO.StringIO to be on more equal footing). A similar thing goes for html5lib, which has developed its Python 3 support separately from its Python 2 code. If we can't find a reasonable way to handle all of this then what I will do is branch the unladen benchmarks for 2.x/3.x benchmarking, and then create another branch of the benchmark suite to just be for Python 3.x so that we can start fresh with a new set of benchmarks that will never change themselves for benchmarking Python 3 itself. That would also mean we could start off with whatever is needed from PyPy and unladen to have the optimal benchmark runner for speed.python.org. From fijall at gmail.com Sat Sep 1 20:54:16 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 1 Sep 2012 20:54:16 +0200 Subject: [Speed] Are benchmarks and libraries mutable? In-Reply-To: References: Message-ID: On Sat, Sep 1, 2012 at 7:21 PM, Brett Cannon wrote: > Now that I can run benchmarks against Python 2.7 and 3.3 simultaneously, I'm > ready to start updating the benchmarks. This involves two parts. > > One is moving benchmarks from PyPy over to the unladen repo on > hg.python.org/benchmarks. But I wanted to first make sure people don't view > the benchmarks as immutable (e.g. as Octane does: > https://developers.google.com/octane/faq). Since the benchmarks are always > relative between two interpreters their immutability isn't critical compared > to if we were to report some overall score. But it also means that any > changes made would throw off historical comparisons. For instance, if I take > PyPy's Mako benchmark (which does a lot more work), should it be named > mako_v2, or should we just replace mako wholesale? > > And the second is the same question for libraries.
For instance, the unladen > benchmarks have Django 1.1a0 as the version which is rather ancient. And > with 1.5 coming out with provisional Python 3 support I obviously would like > to update it. But the same questions as with benchmarks crops up in > reference to immutability. Another thing is that 2to3 can't actually be > ported using 2to3 (http://bugs.python.org/issue15834) and so that itself > will require two versions -- a 2.x version (probably from Python 2.7's > stdlib) and a 3.x version (from the 3.2 stdlib) -- which already starts to > add interesting issues for me in terms of comparing performance (e.g. I will > have to probably update the 2.7 code to use io.BytesIO instead of > StringIO.StringIO to be on more equal footing). Similar thing goes for > html5lib which has developed its Python 3 support separately from its Python > 2 code. > > If we can't find a reasonable way to handle all of this then what I will do > is branch the unladen benchmarks for 2.x/3.x benchmarking, and then create > another branch of the benchmark suite to just be for Python 3.x so that we > can start fresh with a new set of benchmarks that will never change > themselves for benchmarking Python 3 itself. That would also mean we could > start of with whatever is needed from PyPy and unladen to have the optimal > benchmark runner for speed.python.org. > > _______________________________________________ > Speed mailing list > Speed at python.org > http://mail.python.org/mailman/listinfo/speed > Ideally I would like benchmarks to be immutable (having _vN names is fine). However, updating libraries might not need to be immutable (after all, you're interested in *the speed of running django*), but maybe we should mark this somehow in the history so we don't compare apples to oranges. Cheers, fijal From solipsis at pitrou.net Sat Sep 1 20:57:16 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 1 Sep 2012 20:57:16 +0200 Subject: [Speed] Are benchmarks and libraries mutable? References: Message-ID: <20120901205716.2d3e9e20@pitrou.net> On Sat, 1 Sep 2012 13:21:36 -0400 Brett Cannon wrote: > > One is moving benchmarks from PyPy over to the unladen repo on > hg.python.org/benchmarks. But I wanted to first make sure people don't view > the benchmarks as immutable (e.g. as Octane does: > https://developers.google.com/octane/faq). Since the benchmarks are always > relative between two interpreters their immutability isn't critical > compared to if we were to report some overall score. But it also means that > any changes made would throw off historical comparisons. For instance, if I > take PyPy's Mako benchmark (which does a lot more work), should it be named > mako_v2, or should we just replace mako wholesale? mako_v2 sounds fine to me. Mutating benchmarks makes things confusing: one person may report that interpreter A is faster than interpreter B on a given benchmark, and another person retort that no, interpreter B is faster than interpreter A. Besides, if you want to have useful timelines on speed.p.o, you definitely need stable benchmarks. > And the second is the same question for libraries. For instance, the > unladen benchmarks have Django 1.1a0 as the version which is rather > ancient. And with 1.5 coming out with provisional Python 3 support I > obviously would like to update it. But the same questions as with > benchmarks crops up in reference to immutability. django_v2 sounds fine too :) > (e.g.
I will have to probably update the 2.7 code to use > io.BytesIO instead of StringIO.StringIO to be on more equal footing). I disagree. If io.BytesIO is faster than StringIO.StringIO then it's normal for the benchmark results to reflect that (ditto if it's slower). > If we can't find a reasonable way to handle all of this then what I will do > is branch the unladen benchmarks for 2.x/3.x benchmarking, and then create > another branch of the benchmark suite to just be for Python 3.x so that we > can start fresh with a new set of benchmarks that will never change > themselves for benchmarking Python 3 itself. Why not simply add Python 3-specific benchmarks to the mix? You can then create a "py3" benchmark suite in perf.py (and perhaps also a "py2" one). Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From greg at krypto.org Sat Sep 1 21:45:14 2012 From: greg at krypto.org (Gregory P. Smith) Date: Sat, 1 Sep 2012 12:45:14 -0700 Subject: [Speed] Are benchmarks and libraries mutable? In-Reply-To: References: Message-ID: On Sat, Sep 1, 2012 at 10:21 AM, Brett Cannon wrote: > Now that I can run benchmarks against Python 2.7 and 3.3 simultaneously, > I'm ready to start updating the benchmarks. This involves two parts. > > One is moving benchmarks from PyPy over to the unladen repo on > hg.python.org/benchmarks. But I wanted to first make sure people don't > view the benchmarks as immutable (e.g. as Octane does: > https://developers.google.com/octane/faq). Since the benchmarks are > always relative between two interpreters their immutability isn't critical > compared to if we were to report some overall score. But it also means that > any changes made would throw off historical comparisons. For instance, if I > take PyPy's Mako benchmark (which does a lot more work), should it be named > mako_v2, or should we just replace mako wholesale? > I dislike benchmark immutability. The rest of the world, including your local computing environment where benchmarks run, continues to change around benchmarks, which really makes using historical benchmark data from a run on an old version for comparison to a recent modern run pointless. What is needed more is benchmark *rerunability* and *repeatability*, so that an old version of a Python implementation can be built and run against the current benchmark suite today within the exact same environment as a current version of a Python implementation. The key is that they ran the same thing on the same hardware in the same configuration at around the same time. Nothing else is a valid comparison as too many untracked, unquantified variables have changed in the interim. Where the above clearly fails: creating historical trend graphs. If you want a setup that runs the benchmarks after every commit, or at least runs them as continuously as possible, _that_ benchmark suite needs to be as immutable as possible. The machine on which they are run also needs to be locked down to have no updates applied and nothing else running on it *ever*. Whenever either the benchmark suite or the OS, distro or hardware the historical-trend benchmarks run on is mutated, it needs to be clearly noted so that deltas at that time in the results can be flagged as a discontinuity in the trend data due to the external changes. ONE way to do this is to always version benchmark names. Any time one is updated, give it a new versioned name so it can't be compared with past results.
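To make that concrete, here is a minimal sketch of what a versioned-name registry could look like (hypothetical registry and workloads for illustration only; the real unladen/PyPy suite is laid out differently):

    # Sketch: an updated benchmark gets a new versioned name instead of
    # silently replacing the old one, so old results stay meaningful.
    BENCHMARKS = {}

    def register(name):
        def decorator(func):
            if name in BENCHMARKS:
                # Mutating an existing entry would invalidate historical results.
                raise ValueError("benchmark %r already exists; add %s_vN instead"
                                 % (name, name))
            BENCHMARKS[name] = func
            return func
        return decorator

    @register("mako")
    def bench_mako(loops):
        # Original (smaller) workload, kept unchanged for historical comparability.
        return sum(i * i for i in range(loops))

    @register("mako_v2")
    def bench_mako_v2(loops):
        # Heavier updated workload lives under a new name and starts a new timeline.
        return sum(i * i for i in range(loops * 25))

A runner can then keep comparing mako on old data and track mako_v2 going forward without ever mixing the two.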
Otherwise for historical data, periodically rerunning the benchmark suite on older versions (releases and betas) for use in modern comparisons is ideal. -gps > And the second is the same question for libraries. For instance, the > unladen benchmarks have Django 1.1a0 as the version which is rather > ancient. And with 1.5 coming out with provisional Python 3 support I > obviously would like to update it. But the same questions as with > benchmarks crops up in reference to immutability. Another thing is that > 2to3 can't actually be ported using 2to3 ( > http://bugs.python.org/issue15834) and so that itself will require two > versions -- a 2.x version (probably from Python 2.7's stdlib) and a 3.x > version (from the 3.2 stdlib) -- which already starts to add interesting > issues for me in terms of comparing performance (e.g. I will have to > probably update the 2.7 code to use io.BytesIO instead of StringIO.StringIO > to be on more equal footing). Similar thing goes for html5lib which has > developed its Python 3 support separately from its Python 2 code. > > If we can't find a reasonable way to handle all of this then what I will > do is branch the unladen benchmarks for 2.x/3.x benchmarking, and then > create another branch of the benchmark suite to just be for Python 3.x so > that we can start fresh with a new set of benchmarks that will never change > themselves for benchmarking Python 3 itself. That would also mean we could > start of with whatever is needed from PyPy and unladen to have the optimal > benchmark runner for speed.python.org. > > _______________________________________________ > Speed mailing list > Speed at python.org > http://mail.python.org/mailman/listinfo/speed > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sat Sep 1 22:37:40 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 1 Sep 2012 22:37:40 +0200 Subject: [Speed] Are benchmarks and libraries mutable? In-Reply-To: References: Message-ID: On Sat, Sep 1, 2012 at 9:45 PM, Gregory P. Smith wrote: > > > On Sat, Sep 1, 2012 at 10:21 AM, Brett Cannon wrote: >> >> Now that I can run benchmarks against Python 2.7 and 3.3 simultaneously, >> I'm ready to start updating the benchmarks. This involves two parts. >> >> One is moving benchmarks from PyPy over to the unladen repo on >> hg.python.org/benchmarks. But I wanted to first make sure people don't view >> the benchmarks as immutable (e.g. as Octane does: >> https://developers.google.com/octane/faq). Since the benchmarks are always >> relative between two interpreters their immutability isn't critical compared >> to if we were to report some overall score. But it also means that any >> changes made would throw off historical comparisons. For instance, if I take >> PyPy's Mako benchmark (which does a lot more work), should it be named >> mako_v2, or should we just replace mako wholesale? > > > I dislike benchmark immutability. The rest of the world including your > local computing environment where benchmarks run continues to change around > benchmarks which really makes using historical benchmark data from a run on > an old version for comparison to a recent modern run pointless. So far we (pypy) managed to maintain enough of the environment under control to have meaningful historical data. We have the same machine and we monitor whether changes introduce something new or not. Of course ideally, it's impossible, but for real world what we're doing is good enough. 
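One way to make that monitoring explicit is to store an environment fingerprint next to every result, so a changed machine or OS shows up as a flagged discontinuity rather than a silent shift. A rough sketch (invented field names and file layout, not what the speed.pypy.org infrastructure actually records):

    # Sketch: attach an environment fingerprint to each result so runs taken
    # on a changed machine/OS can be flagged instead of silently compared.
    import hashlib
    import json
    import platform

    def environment_fingerprint():
        env = {
            "machine": platform.machine(),
            "processor": platform.processor(),
            "system": platform.system(),
            "release": platform.release(),
            "python_build": platform.python_build(),
        }
        digest = hashlib.sha1(
            json.dumps(env, sort_keys=True).encode("utf-8")).hexdigest()
        return env, digest

    def record_result(benchmark, seconds, results_file="results.jsonl"):
        env, digest = environment_fingerprint()
        entry = {"benchmark": benchmark, "seconds": seconds,
                 "env": env, "env_id": digest}
        with open(results_file, "a") as f:
            f.write(json.dumps(entry) + "\n")

A comparison tool can then warn (or refuse to compare) when the env_id of two runs differs.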
Cheers, fijal From brett at python.org Sun Sep 2 00:10:34 2012 From: brett at python.org (Brett Cannon) Date: Sat, 1 Sep 2012 18:10:34 -0400 Subject: [Speed] Are benchmarks and libraries mutable? In-Reply-To: <20120901205716.2d3e9e20@pitrou.net> References: <20120901205716.2d3e9e20@pitrou.net> Message-ID: On Sat, Sep 1, 2012 at 2:57 PM, Antoine Pitrou wrote: > On Sat, 1 Sep 2012 13:21:36 -0400 > Brett Cannon wrote: > > > > One is moving benchmarks from PyPy over to the unladen repo on > > hg.python.org/benchmarks. But I wanted to first make sure people don't > view > > the benchmarks as immutable (e.g. as Octane does: > > https://developers.google.com/octane/faq). Since the benchmarks are > always > > relative between two interpreters their immutability isn't critical > > compared to if we were to report some overall score. But it also means > that > > any changes made would throw off historical comparisons. For instance, > if I > > take PyPy's Mako benchmark (which does a lot more work), should it be > named > > mako_v2, or should we just replace mako wholesale? > > mako_v2 sounds fine to me. Mutating benchmarks makes things confusing: > one person may report that interpreter A is faster than interpreter B > on a given benchmark, and another person retort that no, interpreter B > is faster than interpreter A. > > Besides, if you want to have useful timelines on speed.p.o, you > definitely need stable benchmarks. > > > And the second is the same question for libraries. For instance, the > > unladen benchmarks have Django 1.1a0 as the version which is rather > > ancient. And with 1.5 coming out with provisional Python 3 support I > > obviously would like to update it. But the same questions as with > > benchmarks crops up in reference to immutability. > > django_v2 sounds fine too :) > True, but having to carry around multiple copies of libraries just becomes a pain. > > > (e.g. I will have to probably update the 2.7 code to use > > io.BytesIO instead of StringIO.StringIO to be on more equal footing). > > I disagree. If io.BytesIO is faster than StringIO.StringIO then it's > normal for the benchmark results to reflect that (ditto if it's slower). > > > If we can't find a reasonable way to handle all of this then what I will > do > > is branch the unladen benchmarks for 2.x/3.x benchmarking, and then > create > > another branch of the benchmark suite to just be for Python 3.x so that > we > > can start fresh with a new set of benchmarks that will never change > > themselves for benchmarking Python 3 itself. > > Why not simply add Python 3-specific benchmarks to the mix? > You can then create a "py3" benchmark suite in perf.py (and perhaps > also a "py2" one). > To avoid historical baggage and to start from a clean slate. I don't necessarily want to carry around Python 2 benchmarks forever. It's not a massive concern, just a nicety. -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sun Sep 2 00:15:14 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 2 Sep 2012 00:15:14 +0200 Subject: [Speed] Are benchmarks and libraries mutable? References: <20120901205716.2d3e9e20@pitrou.net> Message-ID: <20120902001514.59107ba0@pitrou.net> On Sat, 1 Sep 2012 18:10:34 -0400 Brett Cannon wrote: > True, but having to carry around multiple copies of libraries just becomes > a pain. Apart from the initial pain of adding code to deal with multiple copies, I don't see how painful "carrying" copies can be. 
It's not like you have to physically carry the files :) > > > If we can't find a reasonable way to handle all of this then what I will > > do > > > is branch the unladen benchmarks for 2.x/3.x benchmarking, and then > > create > > > another branch of the benchmark suite to just be for Python 3.x so that > > we > > > can start fresh with a new set of benchmarks that will never change > > > themselves for benchmarking Python 3 itself. > > > > Why not simply add Python 3-specific benchmarks to the mix? > > You can then create a "py3" benchmark suite in perf.py (and perhaps > > also a "py2" one). > > > > To avoid historical baggage and to start from a clean slate. I don't > necessarily want to carry around Python 2 benchmarks forever. It's not a > massive concern, just a nicety. We can decide to remove some benchmarks when they become too old. It's no reason to fork a separate development branch, though. Most of the current benchmarks already run under 3.x so you would just duplicate maintenance work. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From fijall at gmail.com Sun Sep 2 09:39:27 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 2 Sep 2012 09:39:27 +0200 Subject: [Speed] Are benchmarks and libraries mutable? In-Reply-To: References: <20120901205716.2d3e9e20@pitrou.net> Message-ID: On Sun, Sep 2, 2012 at 12:10 AM, Brett Cannon wrote: > > > On Sat, Sep 1, 2012 at 2:57 PM, Antoine Pitrou wrote: >> >> On Sat, 1 Sep 2012 13:21:36 -0400 >> Brett Cannon wrote: >> > >> > One is moving benchmarks from PyPy over to the unladen repo on >> > hg.python.org/benchmarks. But I wanted to first make sure people don't >> > view >> > the benchmarks as immutable (e.g. as Octane does: >> > https://developers.google.com/octane/faq). Since the benchmarks are >> > always >> > relative between two interpreters their immutability isn't critical >> > compared to if we were to report some overall score. But it also means >> > that >> > any changes made would throw off historical comparisons. For instance, >> > if I >> > take PyPy's Mako benchmark (which does a lot more work), should it be >> > named >> > mako_v2, or should we just replace mako wholesale? >> >> mako_v2 sounds fine to me. Mutating benchmarks makes things confusing: >> one person may report that interpreter A is faster than interpreter B >> on a given benchmark, and another person retort that no, interpreter B >> is faster than interpreter A. >> >> Besides, if you want to have useful timelines on speed.p.o, you >> definitely need stable benchmarks. >> >> > And the second is the same question for libraries. For instance, the >> > unladen benchmarks have Django 1.1a0 as the version which is rather >> > ancient. And with 1.5 coming out with provisional Python 3 support I >> > obviously would like to update it. But the same questions as with >> > benchmarks crops up in reference to immutability. >> >> django_v2 sounds fine too :) > > > True, but having to carry around multiple copies of libraries just becomes a > pain. You just kill django when you introduce django v2 (alternatively you remove the history and keep the name django). Historical outdated benchmarks are not as interesting. > >> >> >> > (e.g. I will have to probably update the 2.7 code to use >> > io.BytesIO instead of StringIO.StringIO to be on more equal footing). >> >> I disagree. If io.BytesIO is faster than StringIO.StringIO then it's >> normal for the benchmark results to reflect that (ditto if it's slower). 
>> >> > If we can't find a reasonable way to handle all of this then what I will >> > do >> > is branch the unladen benchmarks for 2.x/3.x benchmarking, and then >> > create >> > another branch of the benchmark suite to just be for Python 3.x so that >> > we >> > can start fresh with a new set of benchmarks that will never change >> > themselves for benchmarking Python 3 itself. >> >> Why not simply add Python 3-specific benchmarks to the mix? >> You can then create a "py3" benchmark suite in perf.py (and perhaps >> also a "py2" one). > > > To avoid historical baggage and to start from a clean slate. I don't > necessarily want to carry around Python 2 benchmarks forever. It's not a > massive concern, just a nicety. If you guys want to have any cooperation with us, you have to carry Python 2 benchmarks for indefinite amount of time. Cheers, fijal From brett at python.org Thu Sep 13 00:35:57 2012 From: brett at python.org (Brett Cannon) Date: Wed, 12 Sep 2012 18:35:57 -0400 Subject: [Speed] What PyPy benchmarks are (un)important? Message-ID: I went through the list of benchmarks that PyPy has to see which ones could be ported to Python 3 now (others can be in the future but they depend on a project who has not released an official version with python 3 support): ai chaos fannkuch float meteor-contest nbody_modified richards spectral-norm telco bm_chameleon* bm_mako go hexiom2 json_bench pidigits pyflate-fast raytrace-simple sphinx* The first grouping is the 20 shown on the speed.pypy.org homepage, the rest are in the complete list. Anything with an asterisk has an external dependency that is not already in the unladen benchmarks. Are the twenty shown on the homepage of speed.pypy.org in some way special, or were they the first benchmarks that you were good/bad at, or what? Are there any benchmarks here that are particularly good or bad? I'm trying to prioritize what benchmarks I port so that if I hit a time crunch I got the critical ones moved first. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Thu Sep 13 11:09:41 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 13 Sep 2012 11:09:41 +0200 Subject: [Speed] What PyPy benchmarks are (un)important? In-Reply-To: References: Message-ID: On Thu, Sep 13, 2012 at 12:35 AM, Brett Cannon wrote: > I went through the list of benchmarks that PyPy has to see which ones could > be ported to Python 3 now (others can be in the future but they depend on a > project who has not released an official version with python 3 support): > > ai > chaos > fannkuch > float > meteor-contest > nbody_modified > richards > spectral-norm > telco > > bm_chameleon* > bm_mako > go > hexiom2 > json_bench > pidigits > pyflate-fast > raytrace-simple > sphinx* > > The first grouping is the 20 shown on the speed.pypy.org homepage, the rest > are in the complete list. Anything with an asterisk has an external > dependency that is not already in the unladen benchmarks. > > Are the twenty shown on the homepage of speed.pypy.org in some way special, > or were they the first benchmarks that you were good/bad at, or what? Are > there any benchmarks here that are particularly good or bad? I'm trying to > prioritize what benchmarks I port so that if I hit a time crunch I got the > critical ones moved first. The 20 shown on the front page are the ones that we have full historical data, so we can compare. Others are simply newer. 
I don't think there is any priority associated, we should probably put the others on the first page despite not having full data. From brett at python.org Fri Sep 14 22:19:39 2012 From: brett at python.org (Brett Cannon) Date: Fri, 14 Sep 2012 16:19:39 -0400 Subject: [Speed] standalone PyPy benchmarks ported Message-ID: So I managed to get the following benchmarks moved into the unladen repo (not pushed yet until I figure out some reasonable scaling values as some finish probably too fast and others go for a while): chaos fannkuch meteor-contest (renamed meteor_contest) spectral-norm (renamed spectral_norm) telco bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this benchmark) go hexiom2 json_bench (renamed json_dump_v2) raytrace_simple (renamed raytrace) Most of the porting was range/xrange related. After that is was str/unicode. I also stopped having the benchmarks write out files as it was always to verify results and not a core part of the benchmark. That leaves us with the benchmarks that rely on third-party projects. The chameleon benchmark can probably be ported as chameleon has a version released running on Python 3. But django and html5lib have only in-development versions that support Python 3. If we want to pull in the tip of their repos then those benchmarks can also be ported now rather than later. People have opinions on in-dev code vs. released for benchmarking? There is also the sphinx benchmark, but that requires getting CPython's docs building under Python 3 (see http://bugs.python.org/issue10224). -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Sep 14 23:44:31 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 14 Sep 2012 23:44:31 +0200 Subject: [Speed] standalone PyPy benchmarks ported In-Reply-To: References: Message-ID: On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon wrote: > So I managed to get the following benchmarks moved into the unladen repo > (not pushed yet until I figure out some reasonable scaling values as some > finish probably too fast and others go for a while): > > chaos > fannkuch > meteor-contest (renamed meteor_contest) > spectral-norm (renamed spectral_norm) > telco > bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this benchmark) > go > hexiom2 > json_bench (renamed json_dump_v2) > raytrace_simple (renamed raytrace) > > Most of the porting was range/xrange related. After that is was str/unicode. > I also stopped having the benchmarks write out files as it was always to > verify results and not a core part of the benchmark. > > That leaves us with the benchmarks that rely on third-party projects. The > chameleon benchmark can probably be ported as chameleon has a version > released running on Python 3. But django and html5lib have only > in-development versions that support Python 3. If we want to pull in the tip > of their repos then those benchmarks can also be ported now rather than > later. People have opinions on in-dev code vs. released for benchmarking? > > There is also the sphinx benchmark, but that requires getting CPython's docs > building under Python 3 (see http://bugs.python.org/issue10224). > > _______________________________________________ > Speed mailing list > Speed at python.org > http://mail.python.org/mailman/listinfo/speed > great job! 
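The porting Brett describes (range/xrange, str/unicode, and StringIO versus io) mostly boils down to a small compatibility shim that lets one benchmark body run on both 2.7 and 3.x. A generic sketch of the pattern, assuming nothing about how the suite actually structures it:

    # Sketch of the usual 2.x/3.x compatibility idioms; not code copied from
    # the benchmark suite.
    import sys

    PY3 = sys.version_info[0] >= 3

    if PY3:
        xrange = range            # range is already lazy on 3.x
        text_type = str
    else:
        text_type = unicode       # noqa: this name only exists on 2.x

    from io import BytesIO        # available on both 2.7 and 3.x

    def run_workload(n):
        # The benchmark body is written once against the shimmed names.
        buf = BytesIO()
        for i in xrange(n):
            buf.write(text_type(i).encode("ascii"))
        return buf.tell()

    print(run_workload(1000))

Whether the 2.7 variant should keep StringIO.StringIO for its native speed or switch to io.BytesIO for equal footing is exactly the trade-off discussed earlier in the thread.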
From brett at python.org Sat Sep 15 00:44:09 2012 From: brett at python.org (Brett Cannon) Date: Fri, 14 Sep 2012 18:44:09 -0400 Subject: [Speed] standalone PyPy benchmarks ported In-Reply-To: References: Message-ID: I just pushed the changes. On Fri, Sep 14, 2012 at 4:19 PM, Brett Cannon wrote: > So I managed to get the following benchmarks moved into the unladen repo > (not pushed yet until I figure out some reasonable scaling values as some > finish probably too fast and others go for a while): > > chaos > fannkuch > meteor-contest (renamed meteor_contest) > spectral-norm (renamed spectral_norm) > telco > bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this benchmark) > go > hexiom2 > json_bench (renamed json_dump_v2) > raytrace_simple (renamed raytrace) > > Most of the porting was range/xrange related. After that is was > str/unicode. I also stopped having the benchmarks write out files as it was > always to verify results and not a core part of the benchmark. > > That leaves us with the benchmarks that rely on third-party projects. The > chameleon benchmark can probably be ported as chameleon has a version > released running on Python 3. But django and html5lib have only > in-development versions that support Python 3. If we want to pull in the > tip of their repos then those benchmarks can also be ported now rather than > later. People have opinions on in-dev code vs. released for benchmarking? > > There is also the sphinx benchmark, but that requires getting CPython's > docs building under Python 3 (see http://bugs.python.org/issue10224). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sun Sep 16 16:43:02 2012 From: brett at python.org (Brett Cannon) Date: Sun, 16 Sep 2012 10:43:02 -0400 Subject: [Speed] standalone PyPy benchmarks ported In-Reply-To: References: Message-ID: Quick question about the hexiom2 benchmark: what does it measure? It is by far the slowest benchmark I ported, and considering it isn't a real-world app benchmark I want to make sure the slowness of it is worth it. Otherwise I would rather drop it since having something run 1/25 as many iterations compared to the other simple benchmarks seems to water down its robustness. On Fri, Sep 14, 2012 at 5:44 PM, Maciej Fijalkowski wrote: > On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon wrote: > > So I managed to get the following benchmarks moved into the unladen repo > > (not pushed yet until I figure out some reasonable scaling values as some > > finish probably too fast and others go for a while): > > > > chaos > > fannkuch > > meteor-contest (renamed meteor_contest) > > spectral-norm (renamed spectral_norm) > > telco > > bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this > benchmark) > > go > > hexiom2 > > json_bench (renamed json_dump_v2) > > raytrace_simple (renamed raytrace) > > > > Most of the porting was range/xrange related. After that is was > str/unicode. > > I also stopped having the benchmarks write out files as it was always to > > verify results and not a core part of the benchmark. > > > > That leaves us with the benchmarks that rely on third-party projects. The > > chameleon benchmark can probably be ported as chameleon has a version > > released running on Python 3. But django and html5lib have only > > in-development versions that support Python 3. If we want to pull in the > tip > > of their repos then those benchmarks can also be ported now rather than > > later. People have opinions on in-dev code vs. 
released for benchmarking? > > > > There is also the sphinx benchmark, but that requires getting CPython's > docs > > building under Python 3 (see http://bugs.python.org/issue10224). > > > > _______________________________________________ > > Speed mailing list > > Speed at python.org > > http://mail.python.org/mailman/listinfo/speed > > > > great job! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Sun Sep 16 16:54:37 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sun, 16 Sep 2012 16:54:37 +0200 Subject: [Speed] standalone PyPy benchmarks ported In-Reply-To: References: Message-ID: On Sun, Sep 16, 2012 at 4:43 PM, Brett Cannon wrote: > Quick question about the hexiom2 benchmark: what does it measure? It is by > far the slowest benchmark I ported, and considering it isn't a real-world > app benchmark I want to make sure the slowness of it is worth it. Otherwise > I would rather drop it since having something run 1/25 as many iterations > compared to the other simple benchmarks seems to water down its robustness. It's a puzzle solver. It got included because PyPy 1.9 got slower than 1.8 on this particular benchmark that people were actually running somewhere, so it has *some* value. I wonder, does adding a fixed random number seed help the distribution? > > > On Fri, Sep 14, 2012 at 5:44 PM, Maciej Fijalkowski > wrote: >> >> On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon wrote: >> > So I managed to get the following benchmarks moved into the unladen repo >> > (not pushed yet until I figure out some reasonable scaling values as >> > some >> > finish probably too fast and others go for a while): >> > >> > chaos >> > fannkuch >> > meteor-contest (renamed meteor_contest) >> > spectral-norm (renamed spectral_norm) >> > telco >> > bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this >> > benchmark) >> > go >> > hexiom2 >> > json_bench (renamed json_dump_v2) >> > raytrace_simple (renamed raytrace) >> > >> > Most of the porting was range/xrange related. After that is was >> > str/unicode. >> > I also stopped having the benchmarks write out files as it was always to >> > verify results and not a core part of the benchmark. >> > >> > That leaves us with the benchmarks that rely on third-party projects. >> > The >> > chameleon benchmark can probably be ported as chameleon has a version >> > released running on Python 3. But django and html5lib have only >> > in-development versions that support Python 3. If we want to pull in the >> > tip >> > of their repos then those benchmarks can also be ported now rather than >> > later. People have opinions on in-dev code vs. released for >> > benchmarking? >> > >> > There is also the sphinx benchmark, but that requires getting CPython's >> > docs >> > building under Python 3 (see http://bugs.python.org/issue10224). >> > >> > _______________________________________________ >> > Speed mailing list >> > Speed at python.org >> > http://mail.python.org/mailman/listinfo/speed >> > >> >> great job! > > From brett at python.org Mon Sep 17 17:00:47 2012 From: brett at python.org (Brett Cannon) Date: Mon, 17 Sep 2012 11:00:47 -0400 Subject: [Speed] standalone PyPy benchmarks ported In-Reply-To: References: Message-ID: On Sun, Sep 16, 2012 at 10:54 AM, Maciej Fijalkowski wrote: > On Sun, Sep 16, 2012 at 4:43 PM, Brett Cannon wrote: > > Quick question about the hexiom2 benchmark: what does it measure? 
It is > by > > far the slowest benchmark I ported, and considering it isn't a real-world > > app benchmark I want to make sure the slowness of it is worth it. > Otherwise > > I would rather drop it since having something run 1/25 as many iterations > > compared to the other simple benchmarks seems to water down its > robustness. > > It's a puzzle solver. It got included because PyPy 1.9 got slower than > 1.8 on this particular benchmark that people were actually running > somewhere, so it has *some* value. Fair enough. Just wanted to make sure that it was worth having a slow execution over. > I wonder, does adding a fixed > random number seed help the distribution? > Fix how? hexiom2 doesn't use a random value for anything. -Brett > > > > > > > On Fri, Sep 14, 2012 at 5:44 PM, Maciej Fijalkowski > > wrote: > >> > >> On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon > wrote: > >> > So I managed to get the following benchmarks moved into the unladen > repo > >> > (not pushed yet until I figure out some reasonable scaling values as > >> > some > >> > finish probably too fast and others go for a while): > >> > > >> > chaos > >> > fannkuch > >> > meteor-contest (renamed meteor_contest) > >> > spectral-norm (renamed spectral_norm) > >> > telco > >> > bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this > >> > benchmark) > >> > go > >> > hexiom2 > >> > json_bench (renamed json_dump_v2) > >> > raytrace_simple (renamed raytrace) > >> > > >> > Most of the porting was range/xrange related. After that is was > >> > str/unicode. > >> > I also stopped having the benchmarks write out files as it was always > to > >> > verify results and not a core part of the benchmark. > >> > > >> > That leaves us with the benchmarks that rely on third-party projects. > >> > The > >> > chameleon benchmark can probably be ported as chameleon has a version > >> > released running on Python 3. But django and html5lib have only > >> > in-development versions that support Python 3. If we want to pull in > the > >> > tip > >> > of their repos then those benchmarks can also be ported now rather > than > >> > later. People have opinions on in-dev code vs. released for > >> > benchmarking? > >> > > >> > There is also the sphinx benchmark, but that requires getting > CPython's > >> > docs > >> > building under Python 3 (see http://bugs.python.org/issue10224). > >> > > >> > _______________________________________________ > >> > Speed mailing list > >> > Speed at python.org > >> > http://mail.python.org/mailman/listinfo/speed > >> > > >> > >> great job! > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Sep 17 17:36:48 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 17 Sep 2012 17:36:48 +0200 Subject: [Speed] standalone PyPy benchmarks ported In-Reply-To: References: Message-ID: On Mon, Sep 17, 2012 at 5:00 PM, Brett Cannon wrote: > > > On Sun, Sep 16, 2012 at 10:54 AM, Maciej Fijalkowski > wrote: >> >> On Sun, Sep 16, 2012 at 4:43 PM, Brett Cannon wrote: >> > Quick question about the hexiom2 benchmark: what does it measure? It is >> > by >> > far the slowest benchmark I ported, and considering it isn't a >> > real-world >> > app benchmark I want to make sure the slowness of it is worth it. >> > Otherwise >> > I would rather drop it since having something run 1/25 as many >> > iterations >> > compared to the other simple benchmarks seems to water down its >> > robustness. >> >> It's a puzzle solver. 
It got included because PyPy 1.9 got slower than >> 1.8 on this particular benchmark that people were actually running >> somewhere, so it has *some* value. > > > Fair enough. Just wanted to make sure that it was worth having a slow > execution over. > >> >> I wonder, does adding a fixed >> random number seed help the distribution? > > > Fix how? hexiom2 doesn't use a random value for anything. Ok, then please explain why having 1/25th of iterations kill robustness? > > -Brett > >> >> >> > >> > >> > On Fri, Sep 14, 2012 at 5:44 PM, Maciej Fijalkowski >> > wrote: >> >> >> >> On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon >> >> wrote: >> >> > So I managed to get the following benchmarks moved into the unladen >> >> > repo >> >> > (not pushed yet until I figure out some reasonable scaling values as >> >> > some >> >> > finish probably too fast and others go for a while): >> >> > >> >> > chaos >> >> > fannkuch >> >> > meteor-contest (renamed meteor_contest) >> >> > spectral-norm (renamed spectral_norm) >> >> > telco >> >> > bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this >> >> > benchmark) >> >> > go >> >> > hexiom2 >> >> > json_bench (renamed json_dump_v2) >> >> > raytrace_simple (renamed raytrace) >> >> > >> >> > Most of the porting was range/xrange related. After that is was >> >> > str/unicode. >> >> > I also stopped having the benchmarks write out files as it was always >> >> > to >> >> > verify results and not a core part of the benchmark. >> >> > >> >> > That leaves us with the benchmarks that rely on third-party projects. >> >> > The >> >> > chameleon benchmark can probably be ported as chameleon has a version >> >> > released running on Python 3. But django and html5lib have only >> >> > in-development versions that support Python 3. If we want to pull in >> >> > the >> >> > tip >> >> > of their repos then those benchmarks can also be ported now rather >> >> > than >> >> > later. People have opinions on in-dev code vs. released for >> >> > benchmarking? >> >> > >> >> > There is also the sphinx benchmark, but that requires getting >> >> > CPython's >> >> > docs >> >> > building under Python 3 (see http://bugs.python.org/issue10224). >> >> > >> >> > _______________________________________________ >> >> > Speed mailing list >> >> > Speed at python.org >> >> > http://mail.python.org/mailman/listinfo/speed >> >> > >> >> >> >> great job! >> > >> > > > From brett at python.org Mon Sep 17 20:17:12 2012 From: brett at python.org (Brett Cannon) Date: Mon, 17 Sep 2012 14:17:12 -0400 Subject: [Speed] standalone PyPy benchmarks ported In-Reply-To: References: Message-ID: On Mon, Sep 17, 2012 at 11:36 AM, Maciej Fijalkowski wrote: > On Mon, Sep 17, 2012 at 5:00 PM, Brett Cannon wrote: > > > > > > On Sun, Sep 16, 2012 at 10:54 AM, Maciej Fijalkowski > > wrote: > >> > >> On Sun, Sep 16, 2012 at 4:43 PM, Brett Cannon wrote: > >> > Quick question about the hexiom2 benchmark: what does it measure? It > is > >> > by > >> > far the slowest benchmark I ported, and considering it isn't a > >> > real-world > >> > app benchmark I want to make sure the slowness of it is worth it. > >> > Otherwise > >> > I would rather drop it since having something run 1/25 as many > >> > iterations > >> > compared to the other simple benchmarks seems to water down its > >> > robustness. > >> > >> It's a puzzle solver. It got included because PyPy 1.9 got slower than > >> 1.8 on this particular benchmark that people were actually running > >> somewhere, so it has *some* value. > > > > > > Fair enough. 
Just wanted to make sure that it was worth having a slow > > execution over. > > > >> > >> I wonder, does adding a fixed > >> random number seed help the distribution? > > > > > > Fix how? hexiom2 doesn't use a random value for anything. > > Ok, then please explain why having 1/25th of iterations kill robustness? > Less iterations to help smooth out any bumps in the measurements. E.g 4 iterations compared to 100 doesn't lead to as even of a measurement. I mean you would hope because the benchmark goes for so long that it would just level out within a single run instead of needing multiple runs to get the same evening out. -Brett > > > > > -Brett > > > >> > >> > >> > > >> > > >> > On Fri, Sep 14, 2012 at 5:44 PM, Maciej Fijalkowski > > >> > wrote: > >> >> > >> >> On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon > >> >> wrote: > >> >> > So I managed to get the following benchmarks moved into the unladen > >> >> > repo > >> >> > (not pushed yet until I figure out some reasonable scaling values > as > >> >> > some > >> >> > finish probably too fast and others go for a while): > >> >> > > >> >> > chaos > >> >> > fannkuch > >> >> > meteor-contest (renamed meteor_contest) > >> >> > spectral-norm (renamed spectral_norm) > >> >> > telco > >> >> > bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this > >> >> > benchmark) > >> >> > go > >> >> > hexiom2 > >> >> > json_bench (renamed json_dump_v2) > >> >> > raytrace_simple (renamed raytrace) > >> >> > > >> >> > Most of the porting was range/xrange related. After that is was > >> >> > str/unicode. > >> >> > I also stopped having the benchmarks write out files as it was > always > >> >> > to > >> >> > verify results and not a core part of the benchmark. > >> >> > > >> >> > That leaves us with the benchmarks that rely on third-party > projects. > >> >> > The > >> >> > chameleon benchmark can probably be ported as chameleon has a > version > >> >> > released running on Python 3. But django and html5lib have only > >> >> > in-development versions that support Python 3. If we want to pull > in > >> >> > the > >> >> > tip > >> >> > of their repos then those benchmarks can also be ported now rather > >> >> > than > >> >> > later. People have opinions on in-dev code vs. released for > >> >> > benchmarking? > >> >> > > >> >> > There is also the sphinx benchmark, but that requires getting > >> >> > CPython's > >> >> > docs > >> >> > building under Python 3 (see http://bugs.python.org/issue10224). > >> >> > > >> >> > _______________________________________________ > >> >> > Speed mailing list > >> >> > Speed at python.org > >> >> > http://mail.python.org/mailman/listinfo/speed > >> >> > > >> >> > >> >> great job! > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Sep 17 22:02:35 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 17 Sep 2012 22:02:35 +0200 Subject: [Speed] standalone PyPy benchmarks ported In-Reply-To: References: Message-ID: On Mon, Sep 17, 2012 at 8:17 PM, Brett Cannon wrote: > > > On Mon, Sep 17, 2012 at 11:36 AM, Maciej Fijalkowski > wrote: >> >> On Mon, Sep 17, 2012 at 5:00 PM, Brett Cannon wrote: >> > >> > >> > On Sun, Sep 16, 2012 at 10:54 AM, Maciej Fijalkowski >> > wrote: >> >> >> >> On Sun, Sep 16, 2012 at 4:43 PM, Brett Cannon wrote: >> >> > Quick question about the hexiom2 benchmark: what does it measure? 
It >> >> > is >> >> > by >> >> > far the slowest benchmark I ported, and considering it isn't a >> >> > real-world >> >> > app benchmark I want to make sure the slowness of it is worth it. >> >> > Otherwise >> >> > I would rather drop it since having something run 1/25 as many >> >> > iterations >> >> > compared to the other simple benchmarks seems to water down its >> >> > robustness. >> >> >> >> It's a puzzle solver. It got included because PyPy 1.9 got slower than >> >> 1.8 on this particular benchmark that people were actually running >> >> somewhere, so it has *some* value. >> > >> > >> > Fair enough. Just wanted to make sure that it was worth having a slow >> > execution over. >> > >> >> >> >> I wonder, does adding a fixed >> >> random number seed help the distribution? >> > >> > >> > Fix how? hexiom2 doesn't use a random value for anything. >> >> Ok, then please explain why having 1/25th of iterations kill robustness? > > > Less iterations to help smooth out any bumps in the measurements. E.g 4 > iterations compared to 100 doesn't lead to as even of a measurement. I mean > you would hope because the benchmark goes for so long that it would just > level out within a single run instead of needing multiple runs to get the > same evening out. Yes precisely :) I think the term "iterations" is a bit overloaded. The "stability" is more important. > > -Brett > >> >> >> > >> > -Brett >> > >> >> >> >> >> >> > >> >> > >> >> > On Fri, Sep 14, 2012 at 5:44 PM, Maciej Fijalkowski >> >> > >> >> > wrote: >> >> >> >> >> >> On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon >> >> >> wrote: >> >> >> > So I managed to get the following benchmarks moved into the >> >> >> > unladen >> >> >> > repo >> >> >> > (not pushed yet until I figure out some reasonable scaling values >> >> >> > as >> >> >> > some >> >> >> > finish probably too fast and others go for a while): >> >> >> > >> >> >> > chaos >> >> >> > fannkuch >> >> >> > meteor-contest (renamed meteor_contest) >> >> >> > spectral-norm (renamed spectral_norm) >> >> >> > telco >> >> >> > bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this >> >> >> > benchmark) >> >> >> > go >> >> >> > hexiom2 >> >> >> > json_bench (renamed json_dump_v2) >> >> >> > raytrace_simple (renamed raytrace) >> >> >> > >> >> >> > Most of the porting was range/xrange related. After that is was >> >> >> > str/unicode. >> >> >> > I also stopped having the benchmarks write out files as it was >> >> >> > always >> >> >> > to >> >> >> > verify results and not a core part of the benchmark. >> >> >> > >> >> >> > That leaves us with the benchmarks that rely on third-party >> >> >> > projects. >> >> >> > The >> >> >> > chameleon benchmark can probably be ported as chameleon has a >> >> >> > version >> >> >> > released running on Python 3. But django and html5lib have only >> >> >> > in-development versions that support Python 3. If we want to pull >> >> >> > in >> >> >> > the >> >> >> > tip >> >> >> > of their repos then those benchmarks can also be ported now rather >> >> >> > than >> >> >> > later. People have opinions on in-dev code vs. released for >> >> >> > benchmarking? >> >> >> > >> >> >> > There is also the sphinx benchmark, but that requires getting >> >> >> > CPython's >> >> >> > docs >> >> >> > building under Python 3 (see http://bugs.python.org/issue10224). 
>> >> >> > >> >> >> > _______________________________________________ >> >> >> > Speed mailing list >> >> >> > Speed at python.org >> >> >> > http://mail.python.org/mailman/listinfo/speed >> >> >> > >> >> >> >> >> >> great job! >> >> > >> >> > >> > >> > > > From tobami at gmail.com Tue Sep 18 20:22:54 2012 From: tobami at gmail.com (Miquel Torres) Date: Tue, 18 Sep 2012 20:22:54 +0200 Subject: [Speed] What PyPy benchmarks are (un)important? In-Reply-To: References: Message-ID: We could easily add the newer benchmarks to the front page, easy to do for the first plot. But the historical plot depends on there being data across all versions, so *that* geometric average would need to be done on the common set of 20 benchmarks for all PyPy versions, which means it will be different from the first average. Alternatively, we can drop the older PyPy versions and start with the oldest one that has data for the full set of benchmarks. Any other ideas? Miquel 2012/9/13 Maciej Fijalkowski : > On Thu, Sep 13, 2012 at 12:35 AM, Brett Cannon wrote: >> I went through the list of benchmarks that PyPy has to see which ones could >> be ported to Python 3 now (others can be in the future but they depend on a >> project who has not released an official version with python 3 support): >> >> ai >> chaos >> fannkuch >> float >> meteor-contest >> nbody_modified >> richards >> spectral-norm >> telco >> >> bm_chameleon* >> bm_mako >> go >> hexiom2 >> json_bench >> pidigits >> pyflate-fast >> raytrace-simple >> sphinx* >> >> The first grouping is the 20 shown on the speed.pypy.org homepage, the rest >> are in the complete list. Anything with an asterisk has an external >> dependency that is not already in the unladen benchmarks. >> >> Are the twenty shown on the homepage of speed.pypy.org in some way special, >> or were they the first benchmarks that you were good/bad at, or what? Are >> there any benchmarks here that are particularly good or bad? I'm trying to >> prioritize what benchmarks I port so that if I hit a time crunch I got the >> critical ones moved first. > > The 20 shown on the front page are the ones that we have full > historical data, so we can compare. Others are simply newer. > > I don't think there is any priority associated, we should probably put > the others on the first page despite not having full data. > _______________________________________________ > Speed mailing list > Speed at python.org > http://mail.python.org/mailman/listinfo/speed From fijall at gmail.com Sat Sep 22 19:11:41 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 22 Sep 2012 19:11:41 +0200 Subject: [Speed] Speed and disk space Message-ID: Hello everyone. I would like to complain that speed has still no disk space, despite having a huge empty disk space on no partition chilling there. Last time I complained was July 13th and it's a little over 2 months by now. Cheers, fijal
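Returning to Miquel's point about the historical plot: the geometric average only stays comparable over time if it is computed on the benchmarks that every revision has in common. A small sketch of that computation (the data layout here is invented for illustration and is not Codespeed's actual model):

    # Sketch: geometric mean over the common subset of benchmarks so that
    # newly added benchmarks don't distort the historical trend line.
    import math

    def geometric_mean(values):
        return math.exp(sum(math.log(v) for v in values) / len(values))

    def historical_trend(results_by_revision):
        # results_by_revision: {revision: {benchmark_name: normalized_time}}
        common = set.intersection(
            *(set(times) for times in results_by_revision.values()))
        return dict(
            (rev, geometric_mean([times[b] for b in sorted(common)]))
            for rev, times in results_by_revision.items())

    # A benchmark that only exists for newer revisions (hexiom2 here) does not
    # affect the common-set average.
    data = {
        "pypy-1.8": {"ai": 0.11, "chaos": 0.20},
        "pypy-1.9": {"ai": 0.10, "chaos": 0.21, "hexiom2": 0.35},
    }
    print(historical_trend(data))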