Possible inaccuracy in Python 2.4 when using profiler calibration

Brian Quinlan brian at sweetapp.com
Thu Jun 15 04:36:26 EDT 2006


I have a possibly misinformed theory that I'd like to share with the list.

I believe that profiler calibration no longer makes sense in Python 2.4 
because C functions are now tracked, and they have a different call 
overhead than Python functions (while calibration is done using only 
Python functions). Here is my reasoning (in code form):

% c:\python24\python
...
>>> import profile
>>> p = profile.Profile()
>>> for i in range(5):
...     print p.calibrate(1000000)
...
4.00375499355e-006
3.95700929469e-006
3.94034406478e-006
4.00315854962e-006
3.99454335716e-006
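
(For reference: as I understand it, calibrate() times a batch of pure-Python
function calls with and without the profiler and works out roughly

     per-event overhead = (time the profiler reports - real elapsed time)
                          / number of call+return events

and the resulting value, used as the bias, is then subtracted from every
timed event -- including the c_call/c_return events that 2.4 added for C
functions.)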

Code
----
import profile
profile.Profile.bias = 3.90e-006 # very conservative

def bar():
    l = []
    for i in range(100000):
        l += [i]

def foo():
    l = []
    for i in range(100000):
        l.append(i) # C function that can be tracked

def baz():
    bar()
    foo()

profile.run('baz()', "out.prof")
from pstats import Stats
s = Stats('out.prof')
s.sort_stats('time', 'calls')
s.print_stats()

from timeit import Timer

t1 = Timer(
    'bar()', 'from __main__ import bar')
print 'bar():', t1.timeit(1000) / 1000

t2 = Timer(
    'foo()', 'from __main__ import foo')
print 'foo():', t2.timeit(1000) / 1000

Output
------

Thu Jun 15 10:22:29 2006    out.prof

          100008 function calls in -0.090 CPU seconds

    Ordered by: internal time, call count

    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
         1    0.058    0.058    0.062    0.062 cal-test.py:4(bar)
         2    0.006    0.003    0.006    0.003 :0(range)
         1    0.004    0.004   -0.090   -0.090 cal-test.py:14(baz)
         1    0.001    0.001    0.001    0.001 :0(setprofile)
         1    0.000    0.000   -0.090   -0.090 profile:0(baz())
         1    0.000    0.000   -0.090   -0.090 <string>:1(?)
         0    0.000             0.000          profile:0(profiler)
         1   -0.066   -0.066   -0.157   -0.157 cal-test.py:9(foo)
    100000   -0.094   -0.000   -0.094   -0.000 :0(append)


bar(): 0.0582713573932
foo(): 0.0370039776005

Analysis
--------

As you can see, the profiled time for "bar" is reasonably close to the 
timeit result, but the numbers for "foo" and "append" are not (they are 
negative). I believe that is because calling the C function "append" under 
the profiler takes less time than the bias accounts for (the bias having 
been measured by timing Python function calls).
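
To put rough numbers on it: "append" is charged -0.094 seconds over 100,000
calls, i.e. about -0.9 microseconds per call. One bias of 3.90 microseconds
is subtracted from the interval charged to each append call, so the real
interval (append itself plus the profiler's C-event overhead) would only be
around 3.0 microseconds -- noticeably less than the ~3.9-4.0 microseconds
per event that calibrate() measured for Python calls.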

So applying a single Python-call-derived bias to every event no longer 
makes sense in Python 2.4.

What do y'all think? Is this a well-known fact? Should I construct a 
test to see if C function call overhead is actually less than Python 
function call overhead?
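
Something like this untested sketch is what I have in mind ("max" just
stands in for an arbitrary C builtin, and it assumes each call produces
exactly two profiler events):

import profile
from timeit import default_timer as clock

N = 100000

def python_callee():
    return 1

def call_python():
    f = python_callee
    for i in xrange(N):
        f()

def call_c():
    f = max  # stand-in for an arbitrary C builtin
    for i in xrange(N):
        f(1, 2)

def per_event_overhead(workload):
    # time the workload without the profiler
    t0 = clock()
    workload()
    plain = clock() - t0
    # time it again under a profiler with no bias correction applied
    p = profile.Profile()
    p.bias = 0
    t0 = clock()
    p.runcall(workload)
    profiled = clock() - t0
    # assume two events per call (call/return or c_call/c_return)
    return (profiled - plain) / (2.0 * N)

print 'Python call overhead per event:', per_event_overhead(call_python)
print 'C call overhead per event:     ', per_event_overhead(call_c)

If the second number comes out clearly smaller than the first, that would
confirm the theory.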

Cheers,
Brian
