[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations
Brett Cannon
report at bugs.python.org
Thu Feb 4 12:47:37 EST 2016
Brett Cannon added the comment:
What would happen if we shifted to counting the number of executions within a set amount of time instead of timing how fast a single execution occurred? I believe some JavaScript benchmarks started doing this about a decade ago when they realized CPUs were getting so fast that older benchmarks completed too quickly to be reliably measured.

This would also give a strong notion of how long a benchmark run will take, based on the number of iterations and the time bucket a benchmark is placed in (i.e., for microbenchmarks we could say a second, while for longer-running benchmarks we can increase that threshold). And it won't hurt benchmark comparisons since we have always done relative comparisons rather than absolute ones.
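A minimal sketch of the time-based approach described above — the names `calibrate` and `target_seconds` are illustrative assumptions here, not perf.py's actual API:

```python
import time

def calibrate(bench, target_seconds=1.0):
    """Double the iteration count until one timed run of `bench`
    takes at least `target_seconds`, then report a relative score
    (iterations per second) rather than a single-run wall time."""
    iterations = 1
    while True:
        start = time.perf_counter()
        for _ in range(iterations):
            bench()
        elapsed = time.perf_counter() - start
        if elapsed >= target_seconds:
            return iterations, iterations / elapsed
        iterations *= 2

# Example: a microbenchmark placed in a short time bucket.
iters, per_sec = calibrate(lambda: [i * i for i in range(100)],
                           target_seconds=0.1)
```

Because the score is executions per unit time, comparing two interpreters stays a relative comparison, and the total run length is bounded by the bucket size regardless of CPU speed.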
----------
_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue26275>
_______________________________________