[Speed] Tool to run Python microbenchmarks

Maciej Fijalkowski fijall at gmail.com
Fri Feb 12 10:42:08 EST 2016


Hi Victor

timeit does two really terrible things: it uses min() over the timings and
it disables the garbage collector, which makes it completely unreliable.
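
To make that concrete: the stdlib timeit module calls gc.disable() around
the timed loop unless you re-enable the collector from the setup string,
and the usual convention (including timeit's own CLI at the time) is to
report only the minimum of the repeats, which throws away the run-to-run
spread. A rough sketch of doing both differently; the statement being
timed and the loop/repeat counts are arbitrary choices for illustration:

import statistics
import timeit

SETUP = """
import gc, random
gc.enable()   # timeit turns the collector off by default; turn it back on
random.seed(0)
data = [random.random() for _ in range(1000)]
"""

# 20 repeats of 1000 loops each; repeat() returns total seconds per repeat
raw = timeit.repeat("sorted(data)", setup=SETUP, repeat=20, number=1000)
per_loop = [t / 1000 for t in raw]

# min() alone hides run-to-run variation; report the spread as well
print("min %.2e s  mean %.2e s  stdev %.2e s" % (
    min(per_loop), statistics.mean(per_loop), statistics.stdev(per_loop)))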

On Thu, Feb 11, 2016 at 11:39 PM, Victor Stinner
<victor.stinner at gmail.com> wrote:
> Hi,
>
> To run "micro"-benchmarks on "micro"-optimizations, I started to use
> timeit, but in my experience timeit is far from reliable.
>
> When I say micro: I'm talking about a test which takes less than 1000
> ns, sometimes even a few nanoseconds!
>
> You always have to run the same micro-benchmark with timeit *at least*
> 5 times to find the "real" "minimum" runtime.
>
> That's why I wrote my own tool to run microbenchmarks:
> https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py
>
> Yury suggested that I add this tool to the Python benchmark project.
> I'm ok with that, but only if we rename it to "microbench.py" :-) I
> wrote this tool to compare micro-optimizations with a long list of
> very simple tests. The result is written into a file. Then you can
> compare two files, or more than two, and maybe even compare
> multiple files to a "reference". It "hides" differences smaller
> than 5% to ignore the noise.
>
> The main feature of benchmark.py is that it calibrates the benchmark
> using time to choose the number of runs and number of loops. I
> proposed a similar idea for perf.py:
> https://bugs.python.org/issue26275
>
> What do you think? Would this tool be useful?
>
> Victor
> _______________________________________________
> Speed mailing list
> Speed at python.org
> https://mail.python.org/mailman/listinfo/speed
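
For readers skimming the archive, here is what the "hides differences
smaller than 5%" rule in the quoted mail boils down to, as a sketch; the
function names and threshold handling are illustrative, not benchmark.py's
actual code:

def relative_change(old, new):
    """Relative difference between two timings: -0.05 means 5% faster."""
    return (new - old) / old

def compare(old, new, threshold=0.05):
    """Report 'same' when the difference is below the noise threshold."""
    change = relative_change(old, new)
    if abs(change) < threshold:
        return "same (within %.0f%%)" % (threshold * 100)
    return "%+.1f%%" % (change * 100)

# e.g. 250 ns before a micro-optimization, 235 ns and 245 ns after
print(compare(250e-9, 235e-9))   # -> -6.0%
print(compare(250e-9, 245e-9))   # -> same (within 5%)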
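
And a minimal sketch of the calibration idea: keep doubling the loop count
until a single timed run lasts long enough to be measured reliably, then
take several runs at that count. This only illustrates the approach, it is
not benchmark.py's actual algorithm:

import time

def calibrate_loops(func, min_time=0.1):
    """Double the loop count until one run lasts at least min_time seconds."""
    loops = 1
    while True:
        t0 = time.perf_counter()
        for _ in range(loops):
            func()
        if time.perf_counter() - t0 >= min_time:
            return loops
        loops *= 2

def bench(func, runs=5):
    """Run the calibrated loop several times and return per-call timings."""
    loops = calibrate_loops(func)
    timings = []
    for _ in range(runs):
        t0 = time.perf_counter()
        for _ in range(loops):
            func()
        timings.append((time.perf_counter() - t0) / loops)
    return loops, timings

loops, timings = bench(lambda: sorted(range(100)))
print("calibrated to %d loops; per-call times: %s"
      % (loops, ", ".join("%.2e s" % t for t in timings)))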

