benchmark

Kris Kennaway kris at FreeBSD.org
Sun Aug 10 13:07:22 EDT 2008


Angel Gutierrez wrote:
> Steven D'Aprano wrote:
> 
>> On Thu, 07 Aug 2008 00:44:14 -0700, alex23 wrote:
>>
>>> Steven D'Aprano wrote:
>>>> In other words, about 20% of the time he measures is the time taken to
>>>> print junk to the screen.
>>> Which makes his claim that "all the console outputs have been removed so
>>> that the benchmarking activity is not interfered with by the IO
>>> overheads" somewhat confusing... he didn't notice the output? Wrote it
>>> off as a weird Python side-effect?
>> Wait... I've just remembered, and a quick test confirms... Python only
>> prints bare objects if you are running in an interactive shell. Otherwise
>> the output of bare objects is suppressed unless you explicitly call print.
>>
>> Okay, I guess he is forgiven. False alarm, my bad.
>>
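(Steven's point is easy to check. The following is a minimal sketch,
not anything from the thread: run it as a script and the bare
expression prints nothing, because only the interactive interpreter
feeds expression values to sys.displayhook.)

import sys

1 + 1                    # evaluated, then silently discarded in a script

sys.displayhook(1 + 1)   # prints "2"; this is what the REPL does for you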
>>
> Well... there must be something, because this is what I got in a normal
> script execution:
> 
> [angel@jaulat test]$ python iter.py
> Time per iteration = 357.467989922 microseconds
> [angel@jaulat test]$ vim iter.py
> [angel@jaulat test]$ python iter2.py
> Time per iteration = 320.306909084 microseconds
> [angel@jaulat test]$ vim iter2.py
> [angel@jaulat test]$ python iter2.py
> Time per iteration = 312.917997837 microseconds

What is the standard deviation on those numbers?  What is the confidence
level that they are distinct?  In a thread complaining about poor
benchmarking, it's disappointing to see crappy test methodology being
used to try to demonstrate flaws in the test.
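
For what it's worth, here is one way to get those numbers. This is only
a sketch: the list comprehension below is a stand-in for whatever
iter.py actually timed (that script isn't shown in the thread), and the
statistics module is modern Python (3.4+).

import timeit
from statistics import mean, stdev

NUMBER = 1000   # loop iterations per sample
REPEATS = 10    # independent samples

# timeit.repeat returns one total time per sample; the statement here
# is just a placeholder workload.
samples = timeit.repeat("[x * x for x in range(1000)]",
                        number=NUMBER, repeat=REPEATS)

usec = [t / NUMBER * 1e6 for t in samples]   # per-iteration, microseconds
print("Time per iteration = %.3f +/- %.3f microseconds (min %.3f)"
      % (mean(usec), stdev(usec), min(usec)))

Given two such lists of samples, whether the runs are actually distinct
can then be tested, e.g. with a two-sample t-test (scipy.stats.ttest_ind).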

Kris
