[Speed] CPU speed of one core changes for unknown reason

Victor Stinner victor.stinner at gmail.com
Wed May 18 19:39:57 EDT 2016


2016-05-18 23:05 GMT+02:00 Paul Graydon <paul at paulgraydon.co.uk>:
> Bear in mind that what you see by way of CPU Speed is based on *sampling*, and the CPU can be switched speeds very
> quickly.  Far faster than you'd necessarily see in your periodic updates.  Also note that if your cooling isn't up to
> scratch for handling the CPU running permanently at its top normal speed, thermal throttling will cause the system to
> slow down independently of anything happening OS side.  That's embedded within the chip and can't be disabled.

I checked the temperature of my CPU cores using the "sensors" command
and it was around ~50°C, which doesn't seem "too hot" to me. A more
likely explanation is that I was close to the temperature threshold at
which the CPU switches Turbo Mode on and off.

I disabled Turbo Mode and Hyperthreading on my CPU and I didn't
reproduce the random slowdown anymore.

I also misunderstood how Turbo Mode works. By default, a CPU uses
Turbo Mode, but disables it automatically if the CPU gets too hot. I
expected the opposite: that the CPU would not use Turbo Mode at first,
but would start to use it after a few seconds of high CPU usage.

It looks like the performance also depends on the number of cores
currently used:
https://en.wikipedia.org/wiki/Intel_Turbo_Boost#Example
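
For reference, on Linux with the intel_pstate driver the Turbo Boost
state is exposed through sysfs. A minimal sketch (assuming that driver
is loaded; the file is absent otherwise):

```python
# Sketch: check whether Intel Turbo Boost is enabled via sysfs.
# Assumes Linux with the intel_pstate driver; writing "1" to this
# file (as root) disables Turbo Boost.
NO_TURBO_PATH = "/sys/devices/system/cpu/intel_pstate/no_turbo"

def parse_no_turbo(text):
    # The file contains "1" when turbo is *disabled*, "0" when enabled.
    return text.strip() == "0"   # True => Turbo Boost is active

def turbo_enabled(path=NO_TURBO_PATH):
    try:
        with open(path) as fp:
            return parse_no_turbo(fp.read())
    except OSError:
        return None  # driver not loaded, or not an Intel CPU
```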


> FWIW microbenchmarks are inherently unstable and susceptible to jitter on the system side.

Using CPU isolation helps a lot to reduce the noise coming from the "system".
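
For what it's worth, a minimal Linux-only sketch of the process side
of CPU isolation: boot with the isolcpus= kernel parameter to keep
the scheduler off a core, then pin the benchmark process to that core
(the core number below is just an example):

```python
import os

def pin_to_cpu(cpu):
    # Pin the current process to a single CPU (Linux-only).
    # Combine with the isolcpus= kernel boot parameter so that no
    # other task is scheduled on that core.
    os.sched_setaffinity(0, {cpu})
    return os.sched_getaffinity(0)
```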


> If microbenchmarking really is the right thing for your needs, (...)

Someone asked me to check the performance of my patches using perf.py,
so I'm using it. The accuracy of some specific benchmarks in this
benchmark suite is still an open question ;-)


> ... you should look at running enough runs to be able to get a fair idea of realistic performance.

Right, this idea was already discussed in other threads and is already
implemented in the PyPy flavor of perf.py. I also patched my local
perf.py to do that.
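
As a sketch of that idea (not the actual perf.py patch): repeat the
benchmark many times and report mean and standard deviation instead
of a single number:

```python
import statistics
import timeit

def bench(stmt, repeat=20, number=100000):
    # Run the microbenchmark 'repeat' times; each run executes the
    # statement 'number' times. Report mean +- stdev over the runs
    # rather than a single timing.
    timings = timeit.repeat(stmt, repeat=repeat, number=number)
    return statistics.mean(timings), statistics.stdev(timings)

mean, stdev = bench("sum(range(100))")
print("%.6f s +- %.6f s" % (mean, stdev))
```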


> I do have some concerns that you're increasingly creating a synthetic environment to benchmark against, and that you're
> at risk of optimising towards an environment the code won't actually run in, and might even end up pursuing the wrong
> optimisations.

Yeah, that's an excellent remark :-) It's not the first time that I've
read it. I think that it's OK to use CPU isolation and to tune CPU
options (ex: disable Turbo Mode) to reduce the noise. Other
parameters, like disabling hash randomization or disabling ASLR, are
more of an open question.

It seems to me that disabling randomization (hash function, ASLR)
introduces a risk of reaching an invalid conclusion (that a patch
makes Python faster / slower). But I have read this advice many
times, and perf.py currently explicitly disables hash randomization.
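
To illustrate the hash randomization part: with a fixed PYTHONHASHSEED
the hash of a string is reproducible across interpreter runs, whereas
with randomization enabled it changes from run to run. A small sketch
(not perf.py code):

```python
import os
import subprocess
import sys

def str_hash_in_child(seed):
    # Spawn a child interpreter with a fixed PYTHONHASHSEED and
    # return hash('abc') as computed there.
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.check_output(
        [sys.executable, "-c", "print(hash('abc'))"], env=env)
    return int(out)

# Same seed => same hash in every child interpreter.
print(str_hash_in_child(0) == str_hash_in_child(0))
```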

The most common trend in benchmarking is to disable all sources of
noise and to only consider the minimum (smallest timing). In my
experience (of the last weeks), it just doesn't work, at least for
microbenchmarks.
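
A tiny illustration of why the minimum alone can be misleading: with
a few noisy runs mixed in, min() reports only the best case, while
mean +- stdev exposes the noise (the timings below are made up):

```python
import statistics

# Simulated timings (seconds): most runs are fast, a few are hit
# by system noise (scheduler, Turbo Mode switches, ...).
timings = [0.100, 0.101, 0.100, 0.135, 0.102, 0.129, 0.100, 0.103]

print("min:  %.3f s" % min(timings))  # hides the noisy runs entirely
print("mean: %.3f s +- %.3f s" % (
    statistics.mean(timings), statistics.stdev(timings)))
```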

Victor


More information about the Speed mailing list