[Speed] New benchmark suite for Python

Victor Stinner victor.stinner at gmail.com
Thu Aug 18 12:04:02 EDT 2016


2016-08-18 3:17 GMT+02:00 Victor Stinner <victor.stinner at gmail.com>:
> I made a quick test on PyPy: (...)

I ran a longer test last night. Some benchmarks were slower on PyPy.
In fact, the benchmarks don't give PyPy enough time to warm up its
JIT.

I just released perf 0.7.4, which has better calibration code for
Python implementations using a JIT (PyPy) and better default
configuration values:
   Default (with a JIT, e.g. PyPy): 6 processes, 10 samples per
process (total: 60 samples), and 10 warmups.
vs
   Default (no JIT, e.g. CPython): 20 processes, 3 samples per
process (total: 60 samples), and 1 warmup.
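The idea behind the two sets of defaults can be sketched like this
(a stdlib-only illustration of the values quoted above; the function
name default_config and the PyPy detection heuristic are my own, not
perf's actual code):

```python
import sys

def default_config():
    # Rough sketch of perf 0.7.4's defaults as described in this post:
    # with a JIT (PyPy): fewer but longer-lived processes, so each
    # process runs long enough for the JIT to warm up; without a JIT
    # (CPython): many short processes and a single warmup.
    has_jit = hasattr(sys, 'pypy_version_info')  # heuristic: PyPy only
    if has_jit:
        return {'processes': 6, 'samples_per_process': 10, 'warmups': 10}
    return {'processes': 20, 'samples_per_process': 3, 'warmups': 1}
```

Note that both configurations collect the same total of 60 samples;
only the split between processes and samples per process changes.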

perf 0.7.4 also has new helper functions: python_implementation() and
python_has_jit().
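These helpers behave roughly like the following stdlib-only sketch
(the real implementations live inside the perf module; the PyPy check
here is just one plausible heuristic):

```python
import platform
import sys

def python_implementation():
    # Name of the running Python implementation, lowercased,
    # e.g. "cpython" or "pypy".
    return platform.python_implementation().lower()

def python_has_jit():
    # PyPy exposes sys.pypy_version_info; treat its presence as
    # "has a JIT". Other JIT-ed implementations would need their
    # own checks.
    return hasattr(sys, 'pypy_version_info')
```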

I started patching some benchmarks to increase their number of
warmups even further:

* go: 50 warmups
* hexiom2: 15 warmups
* tornado http: 30 warmups

If perf detects a JIT (PyPy), the warmup step now dynamically
computes more warmup samples when it detects that a raw sample is
shorter than the minimum time (100 ms).
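The dynamic warmup can be sketched as follows (a simplified
illustration of the behavior described above; the function name
warmup and the max_warmups cap are my own assumptions, not perf's
actual code):

```python
import time

MIN_TIME = 0.1  # minimum duration of a raw sample: 100 ms

def warmup(func, loops, max_warmups=50):
    # Keep taking warmup samples until one raw sample reaches
    # MIN_TIME, so a JIT gets enough executions to compile the hot
    # code; cap the number of warmups so the loop always terminates.
    warmups = 0
    while warmups < max_warmups:
        t0 = time.perf_counter()
        for _ in range(loops):
            func()
        raw_sample = time.perf_counter() - t0
        warmups += 1
        if raw_sample >= MIN_TIME:
            break
    return warmups
```

A very fast function never reaches the 100 ms threshold and exhausts
the warmup budget, while a slow one stops after a single warmup.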

All these changes were made to help PyPy warm up its JIT. I'm not
sure they are fully correct; they may make benchmarks less reliable,
since the number of warmup samples is no longer constant. Maybe the
code should be enhanced further to at least use the same parameters
in all worker processes.

Victor


More information about the Speed mailing list