[Python-checkins] r46146 - sandbox/trunk/rjsh-pybench/pybench.py

M.-A. Lemburg mal at egenix.com
Wed May 24 13:11:43 CEST 2006


Steve Holden wrote:
> M.-A. Lemburg wrote:
>> Steve Holden wrote:
> [...]
>>>> This would be more in line with what time.clock() returns
>>>> on Unix platforms, namely the process time... I wonder why
>>>> time.clock() doesn't use this API on Windows.
>>>>
>>> That's a pending change here thanks to Kristjan V Jonsson of CCP, who
>>> brought the same code to my attention yesterday. It shouldn't be a
>>> difficult fix, so I'll see how it affects reliability. On Windows I
> can't imagine it will hurt ...
>>
>> I gave it a go, but the results are not very promising (or
>> I did something wrong). The UserTime value does change, but
>> it seems to measure something different than process time, e.g.
>> if you run "while 1: pass" for a while, the value doesn't
>> change.
>>
>> I've also had a look at the implementation of time.time()
>> vs. time.clock(): time.clock() is definitely more accurate
>> on Windows since it uses the high resolution performance
>> counter:
>>
>> http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winui/winui/windowsuserinterface/windowing/timers/timerreference/timerfunctions/queryperformancecounter.asp
>>
>> This is about as accurate as it'll get on Windows.
>>
>> time.time() uses ftime() which is only accurate to the
>> millisecond (if at all):
>>
>> http://msdn2.microsoft.com/en-us/library/z54t9z5f(VS.80).aspx
>>
> The sandbox copy already uses the same code timeit.py uses to determine 
> the clock to use. 

Sorry, but that approach is just plain wrong:

if sys.platform == "win32":
    # On Windows, the best timer is time.clock()
    default_timer = time.clock
else:
    # On most other platforms the best timer is time.time()
    default_timer = time.time

You never want to use a wall-clock timer to measure
the speed of code execution. There are simply too many
other things going on in a multi-process system for
this to be a reliable approach.
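
For illustration, here's a minimal sketch of a process-time based
timer on Unix (hypothetical code, not what pybench or timeit use);
os.times() reports the user and system CPU time this process has
consumed:

import os

def process_timer():
    # User + system CPU time of this process, in seconds;
    # unaffected by what other processes on the machine are doing.
    user, system = os.times()[:2]
    return user + system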

On Unix, time.clock() gives you process time, which
is far more reliable because it measures the CPU time
the process actually received from the kernel for
execution.

Try it:

print time.clock(); time.sleep(10); print time.clock()

On Unix, you'll get the same results for both prints.
On Windows, you get wall clock results, i.e. a 10 second
difference.
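
A slightly longer sketch of the same experiment (hypothetical helper,
Python 2 syntax as above), timing a sleep and a CPU-bound loop with
both clocks:

import time

def busy_wait(seconds):
    # Burn CPU for roughly `seconds` of wall-clock time.
    end = time.time() + seconds
    while time.time() < end:
        pass

for label, work in (("sleep", lambda: time.sleep(2)),
                    ("busy loop", lambda: busy_wait(2))):
    c0, t0 = time.clock(), time.time()
    work()
    c1, t1 = time.clock(), time.time()
    print "%-10s clock: %.2fs  time: %.2fs" % (label, c1 - c0, t1 - t0)

On Unix, clock() barely moves during the sleep (the process receives
no CPU time) but tracks the busy loop; on Windows it follows wall
clock time in both cases.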

> Uncle Timmy points out that the GetProcessTimes() 
> isn't ubiquitous, so I don't think there's much point in trying to make 
> use of it.

According to MSDN it's only available on the WinNT branch
of the Windows kernel. Since we're optimizing for current
systems anyway, it should be available on all systems where
you'd normally run pybench to check whether an optimization
makes sense or not.
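
For reference, a rough sketch of what reading process time via
GetProcessTimes() could look like from Python with ctypes
(hypothetical code, not Kristjan's patch; FILETIME values are in
100 ns units):

import ctypes
from ctypes import wintypes

class FILETIME(ctypes.Structure):
    _fields_ = [("dwLowDateTime", wintypes.DWORD),
                ("dwHighDateTime", wintypes.DWORD)]

def win_process_time():
    # User + kernel CPU time of the current process, in seconds.
    kernel32 = ctypes.windll.kernel32
    creation, finished, kernel, user = \
        FILETIME(), FILETIME(), FILETIME(), FILETIME()
    ok = kernel32.GetProcessTimes(kernel32.GetCurrentProcess(),
                                  ctypes.byref(creation),
                                  ctypes.byref(finished),
                                  ctypes.byref(kernel),
                                  ctypes.byref(user))
    if not ok:
        raise ctypes.WinError()
    def seconds(ft):
        return ((ft.dwHighDateTime << 32) + ft.dwLowDateTime) * 1e-7
    return seconds(kernel) + seconds(user)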

> The one change I still do want to make is to allow the use of some 
> bogomips-like feature to provide scaled testing to ensure that the 
> individual test times can more easily be made large enough.

Isn't that what the warp factor already implements?

> At that point I'll probably be looking to check it back into the trunk 
> as version 1.4 - unless you have any further suggestions? Indications 
> are that the timings do seem to be more reliable.

I disagree with the clock approach you implemented and with
the change to use a 0 default for calibration runs, so those
changes should not go into the trunk.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, May 24 2006)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

