[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

STINNER Victor report at bugs.python.org
Thu Feb 4 08:33:05 EST 2016


STINNER Victor added the comment:

Florin Papa added the comment:
> I ran perf with calibration and there is no difference in stability
> compared to the unpatched version.

Sorry, what do you call "stability"? For me, stability means that you
run the same benchmark 3, 5 or 10 times and the results are as close
as possible: see the variance and standard deviation in my previous
message.

I'm not talking about the variance/deviation across the N runs of the
bm_xxx.py scripts, but about the variance/deviation of the mean value
displayed by perf.py.
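
To make the distinction concrete, here is a minimal sketch of the kind
of stability I mean (not part of any patch, the numbers are made up):
collect the mean value reported by several full perf.py invocations and
look at their standard deviation:

    import statistics

    # Hypothetical mean values (in seconds) reported by 5 separate
    # perf.py invocations of the same benchmark.
    reported_means = [0.512, 0.508, 0.531, 0.502, 0.525]

    print("mean of means: %.3f s" % statistics.mean(reported_means))
    print("std dev across invocations: %.3f s"
          % statistics.stdev(reported_means))
    # The smaller this standard deviation, the more "stable" perf.py is.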

perf_calibration.patch is a proof of concept. I changed the number of
runs from 50 to 10 to test my patch more easily. You should modify the
patch to keep 50 runs if you want to compare stability.
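
For reference, the core idea of time-based calibration looks like the
sketch below. It is only an illustration under my own assumptions (a
bench() callable and a min_time threshold), not the code from
perf_calibration.patch:

    import time

    def calibrate_loops(bench, min_time=0.1):
        # Double the number of loops until a single timed run of the
        # benchmark takes at least min_time seconds.
        loops = 1
        while True:
            t0 = time.perf_counter()
            for _ in range(loops):
                bench()
            dt = time.perf_counter() - t0
            if dt >= min_time:
                return loops
            loops *= 2

    # Example with a trivial, hypothetical workload:
    print(calibrate_loops(lambda: sum(range(1000))))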

By the way, the --fast/--rigorous options should not only change the
minimum duration of a single run used to calibrate the number of
loops; they should also change the "maximum" duration of perf.py by
using a different number of runs.
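
As an illustration of what I mean (the concrete values are invented,
only the default of 50 runs comes from perf.py):

    # Sketch: each option selects both the minimum calibration time of
    # a single run and the number of runs, so it bounds the total
    # duration of perf.py, not only the duration of one run.
    PRESETS = {
        # option: (min_time per run in seconds, number of runs)
        "--fast":     (0.05, 10),
        "default":    (0.10, 50),
        "--rigorous": (0.50, 100),
    }

    min_time, runs = PRESETS["--fast"]
    print("min_time=%.2fs, runs=%d" % (min_time, runs))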

----------

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue26275>
_______________________________________
