[Speed] New instance of CodeSpeed at speed.python.org running performance on CPython and PyPy?

Zachary Ware zachary.ware+pydev at gmail.com
Thu Sep 1 12:33:34 EDT 2016


On Thu, Sep 1, 2016 at 5:58 AM, Victor Stinner <victor.stinner at gmail.com> wrote:
> Hi,
>
> Would it be possible to run a new instance of CodeSpeed (the website
> behind speed.python.org) which would run the "performance" benchmark
> suite rather than the "benchmarks" benchmark suite? And would it be
> possible to run it on CPython (2.7 and 3.5 branches) and PyPy (master
> branch, maybe also the py3k branch)?

Short answer is yes, please :).  Slightly longer answer is that that's
the plan, but I don't know when I'll have an opportunity to work on
it.  Possibly next week at the sprint; we'll see.

> I found https://github.com/tobami/codespeed/ but I haven't looked at
> it closely yet. I guess some code will need to be written to convert
> perf JSON files to the format expected by CodeSpeed?

The code that's actually running speed.python.org is at
https://github.com/zware/codespeed, speed.python.org branch.  I've
been meaning to get that moved to https://github.com/python/codespeed,
but it hasn't happened yet.  Other relevant code is hidden in the
buildbot master and on the runner box itself, which is not publicly
version controlled (which is bad).

We will need either a translation layer between performance and
CodeSpeed or, if possible, to change the format that performance
outputs to match what CodeSpeed expects.
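
For illustration, the translation layer could be a small script that
reads a perf JSON file and POSTs one result per benchmark to
CodeSpeed's /result/add/ endpoint.  The CodeSpeed field names below
follow the sample submission script in the upstream README; the perf
JSON layout ("benchmarks", "metadata", "runs", "values") is my guess
at the performance 0.2 format and would need checking against real
files:

    import json
    import urllib.parse
    import urllib.request

    CODESPEED_URL = "https://speed.python.org/result/add/"

    def submit_perf_results(filename, commitid, branch, executable):
        # Assumed perf 0.2 layout: a top-level "benchmarks" list whose
        # entries carry "metadata" (with the name) and "runs" of "values".
        with open(filename) as f:
            suite = json.load(f)
        for bench in suite["benchmarks"]:
            name = bench["metadata"]["name"]
            values = [v for run in bench["runs"]
                      for v in run.get("values", [])]
            if not values:
                continue
            data = {
                # Field names from codespeed's tools/save_single_result.py.
                "commitid": commitid,
                "branch": branch,
                "project": "CPython",
                "executable": executable,
                "benchmark": name,
                "environment": "speed-python",  # must match a configured env
                "result_value": sum(values) / len(values),
                "min": min(values),
                "max": max(values),
            }
            body = urllib.parse.urlencode(data).encode("ascii")
            urllib.request.urlopen(CODESPEED_URL, body)

Whether we keep such a script around or teach performance to emit
CodeSpeed's format directly is mostly a question of where the
coupling should live.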

> FYI I released performance 0.2 yesterday. JSON files now contain the
> version of the benchmark suite ("performance_version: 0.2"). I plan to
> use semantic versioning: increase the major version (e.g. to 0.3 now;
> later it will be 1.x, 2.x, etc.) when benchmark results are considered
> incompatible.
>
> For example, I upgraded Django (from 1.9 to 1.10) and Chameleon (from
> 2.22 to 2.24) in performance 0.2.
>
> The question is how to upgrade performance to a new major version:
> should we drop previous benchmark results?
>
> Maybe we should put the performance version in the URL, and use
> "/latest/" by default. Only /latest/ would get new results, and
> /latest/ would restart from an empty set of results when performance
> is upgraded?

I have only enough experience with Django and CodeSpeed to have gotten
speed.python.org to the state it's currently in, so I really don't
know how (un)limited the possibilities are.  One simple method would
be to combine the benchmark name with the performance version (as
sketched below) and periodically clear out old benchmark results.
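
Concretely, the combined key might be built like this (the separator
and the version field are purely illustrative, not anything CodeSpeed
prescribes):

    def versioned_benchmark_name(name, performance_version):
        # e.g. "django_template@0.2"; bumping the suite version then
        # creates a fresh CodeSpeed benchmark rather than mixing
        # incompatible results under one name.
        return "%s@%s" % (name, performance_version)

CodeSpeed would treat each versioned name as a separate benchmark, so
incompatible results never share a timeline.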

> Another option, less exciting, is to never upgrade benchmarks. The
> benchmarks project *added* new benchmarks when a dependency was
> "upgraded". In fact, the old dependency was kept and a new dependency
> (full copy of the code in fact ;-)) was added. So it has django,
> django_v2, django_v3, etc. The problem is that it still uses Mercurial
> 1.2, which was released 7 years ago (2009)... Since upgrading is
> painful, most dependencies became outdated.

I agree that we should have the ability to easily update benchmarks
and actually do so sometimes.

> Do you care about old benchmark results? It's quite easy to regenerate
> them (on demand?) if needed, no? Using Mercurial and Git, it's easy to
> check out any old revision and run a benchmark again on an old version
> of CPython / PyPy / etc.

I suggest that upon updates to the benchmark suite/runner/etc., we
should clear out old results and rerun the benchmarks on a selection
of released versions of each interpreter.  We should also have some
way to trigger a run of the benchmarks on a particular revision of an
interpreter.
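
As a rough sketch of what that rerun could look like on the runner
box (the interpreter paths and the exact pyperformance invocation are
assumptions, not a description of the current setup):

    import subprocess

    # Released interpreters to re-benchmark after a suite upgrade; the
    # paths are placeholders for wherever the runner keeps its builds.
    INTERPRETERS = {
        "cpython-2.7": "/opt/cpython-2.7/bin/python",
        "cpython-3.5": "/opt/cpython-3.5/bin/python3",
        "pypy-master": "/opt/pypy/bin/pypy",
    }

    for name, binary in INTERPRETERS.items():
        # Assumes the runner accepts a target interpreter and an
        # output file; check the real CLI before relying on this.
        subprocess.check_call([
            "pyperformance", "run",
            "--python", binary,
            "-o", "results-%s.json" % name,
        ])

Triggering a run on a particular revision would then just be a matter
of building that revision and pointing the same loop at its binary.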

-- 
Zach

