[Speed] perf 0.3 released

Victor Stinner victor.stinner at gmail.com
Fri Jun 10 06:50:25 EDT 2016


Hi,

I just released perf 0.3. Major changes:

- "python -m perf" CLI now has 3 commands: show, compare and
compare_to. Compare commands says if the difference is significant (I
copied the code from perf.py)
- TextRunner is now able to spawn child processes, parse command line
arguments and more
- If TextRunner detects isolated CPUs, it automatically pins the
worker processes to those isolated CPUs
- Add ``--json-file`` command line option
- Add TextRunner.bench_sample_func() method: the sample function is
responsible for measuring the elapsed time itself, which is useful for
microbenchmarks (see the sketch after this list)
- Many enhancements to the documentation
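
Here is a minimal sketch of what a bench_sample_func() benchmark could
look like (assuming the sample function receives the number of outer
loop iterations and returns the elapsed time in seconds; treat that
signature as an assumption):
---
import time
import perf.text_runner

def sample_func(loops):
    # run the workload 'loops' times and measure the elapsed time ourselves
    t0 = time.perf_counter()
    for _ in range(loops):
        sum(range(100))   # the micro-benchmark being measured
    return time.perf_counter() - t0

perf.text_runner.TextRunner().bench_sample_func(sample_func)
---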

Writing a benchmark now only takes one line:
"perf.text_runner.TextRunner().bench_func(func)"! Full example:
---
import time
import perf.text_runner

def func():
    # the function to benchmark: it just sleeps for 1 ms
    time.sleep(0.001)

# run the benchmark; TextRunner displays the result
perf.text_runner.TextRunner().bench_func(func)
---

I looked at PyPy benchmarks:
https://bitbucket.org/pypy/benchmarks

In the PyPy benchmarks, results can also be serialized to JSON, but
the serialization is only done at the end: only the final result is
serialized. It's not possible to save each run in a JSON file.

Running multiple processes is not supported either.

With perf, the final JSON contains all data: all runs and all
samples, even the warmup samples.
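
As a rough illustration (result.json is just a placeholder name, and
the exact layout of the JSON document isn't described here), a file
written with --json-file is plain JSON and can be inspected with the
standard library:
---
import json

# assumes a benchmark was run with: --json-file result.json
with open("result.json") as fp:
    data = json.load(fp)

# peek at the serialized benchmark: runs, samples and metadata all end
# up in this single document
print(json.dumps(data, indent=2)[:500])
---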

perf now also collects metadata in each worker process, so it is
safer to compare runs: it's possible to manually check when and how
each worker executed the benchmark. For example, the CPU affinity is
now saved in the metadata.

For example, "python -m perf.timeit" now saves the setup and
statements in metadata.

With perf 0.3, TextRunner also includes a built-in calibration to
compute the number of outer loop iterations: the benchmarked function
is repeated enough times that each sample takes between 100 ms and
1 sec (the min/max are configurable).
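
To give an idea of what this calibration does, here is a rough sketch
of the idea only (not perf's actual code; the doubling strategy and
the warning are assumptions):
---
import time

def calibrate(sample_func, min_time=0.1, max_time=1.0):
    # double the number of outer loop iterations until a single sample
    # lasts at least min_time seconds; warn if it exceeds max_time
    loops = 1
    while True:
        elapsed = sample_func(loops)
        if elapsed >= min_time:
            break
        loops *= 2
    if elapsed > max_time:
        print("warning: a sample takes %.3f sec (max: %.3f sec)"
              % (elapsed, max_time))
    return loops

def sample_func(loops):
    t0 = time.perf_counter()
    for _ in range(loops):
        sum(range(100))   # the workload being calibrated
    return time.perf_counter() - t0

print("calibrated loops:", calibrate(sample_func))
---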

Victor