[Speed] performance 0.5.5 and perf 1.3 released

Antoine Pitrou solipsis at pitrou.net
Mon May 29 13:10:00 EDT 2017


Also, to expand a bit on what I'm trying to say: like you, I have my own
idea of which benchmarks are pointless and unrepresentative, but when
maintaining the former benchmark suite I usually refrained from
removing those benchmarks, out of prudence and out of respect for the
people who had written them (and who probably had their reasons for
finding them useful).

Regards

Antoine.


On Mon, 29 May 2017 19:00:22 +0200
Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Mon, 29 May 2017 18:49:37 +0200
> Victor Stinner <victor.stinner at gmail.com>
> wrote:
> > * The ``float`` benchmark now uses ``__slots__`` on the ``Point`` class.  
> 
> So the benchmark numbers are not comparable with previously generated
> ones?
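
For context: ``__slots__`` replaces the per-instance ``__dict__`` with
fixed attribute storage, so instances get smaller and attribute access
gets cheaper, which is exactly why old and new timings diverge.  A
minimal sketch of the kind of change in question (the benchmark's
actual ``Point`` class may look different):

    class Point:
        # Fixed slots instead of a per-instance __dict__: smaller
        # instances, faster attribute access.
        __slots__ = ('x', 'y')

        def __init__(self, x, y):
            self.x = x
            self.y = y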
> 
> > * Remove the following microbenchmarks. They have been moved to the
> >   `pymicrobench <https://github.com/haypo/pymicrobench>`_ project because
> >   they are too short, not representative of real applications and are too
> >   unstable.
> >   
> [...]
> >   - ``logging_silent``: values are below 1 ns on PyPy with 2^27 loops!
> >     (and around 0.7 us on CPython)
> 
> The performance of silent logging calls is actually important for
> any application that has debug() calls on its critical path.
> This is quite common in network and/or distributed programming, where
> you want to be able to log many events to diagnose unexpected runtime
> issues (since many unexpected conditions can occur), but keep those
> logs disabled by default for performance and readability reasons.
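
A minimal sketch of the pattern I mean (the function and its names are
only illustrative):

    import logging

    logger = logging.getLogger("worker")
    logger.setLevel(logging.WARNING)   # DEBUG disabled by default

    def handle_event(event):
        # The logger is not enabled for DEBUG, so nothing is formatted
        # or emitted, but the call still runs on every event: that
        # per-call overhead is what logging_silent measures.
        logger.debug("handling %r", event)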
> 
> This is no more a micro-benchmark than is, say, pickling or JSON
> encoding; and much less so than solving the N-body problem in pure
> Python without Numpy...
> 
> > * Update requirements
> > 
> >   - Django: 1.11 => 1.11.1
> >   - SQLAlchemy: 1.1.9 => 1.1.10
> >   - certifi: 2017.1.23 => 2017.4.17
> >   - perf: 1.2 => 1.3
> >   - mercurial: 4.1.2 => 4.2
> >   - tornado: 4.4.3 => 4.5.1  
> 
> Are those requirements for the benchmark runner or for the benchmarks
> themselves?  If the latter, won't updating the requirements make
> benchmark numbers non-comparable with those generated by previous
> versions?  This is something that the previous benchmark suite tried
> to avoid by using pinned versions of 3rd party libraries.
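
By pinned versions I mean exact pins, e.g. a requirements.txt built
from the versions quoted above, so that every run installs the same
code and the numbers stay comparable:

    # requirements.txt: exact pins, no version ranges
    Django==1.11.1
    SQLAlchemy==1.1.10
    certifi==2017.4.17
    perf==1.3
    mercurial==4.2
    tornado==4.5.1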
> 
> Regards
> 
> Antoine.




