optimization

Steven D'Aprano steve at REMOVE-THIS-cybersource.com.au
Tue Dec 2 06:06:08 EST 2008


On Mon, 01 Dec 2008 19:06:24 -0600, Robert Kern wrote:
 
> As Neal has observed, there is a performance hit for creating functions
> inside of another function. 

True, but it's not a big hit, and I believe it is roughly constant time 
regardless of the size of the function. The inner function has already 
been compiled to a code object by the time outer is defined; all that 
happens at call time is assembling that code object into a new function 
object, and that is quite fast.

On my desktop, I measure the cost of assembling the inner function to be 
around the same as two function lookups and calls. 

>>> def outer():
...     def inner():
...             return None
...     return inner()
...
>>> def ginner():
...     return None
...
>>> def gouter():
...     return ginner()
...
>>> from timeit import Timer
>>> t1 = Timer('outer()', 'from __main__ import outer')
>>> t2 = Timer('gouter()', 'from __main__ import gouter, ginner')
>>> t1.repeat()
[1.782930850982666, 0.96469783782958984, 0.96496009826660156]
>>> t2.repeat()
[1.362678050994873, 0.77759003639221191, 0.58583498001098633]


Not very expensive.
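
To see what that call-time assembly amounts to, try disassembling outer 
itself:

>>> import dis
>>> dis.dis(outer)

(Output elided; the exact bytecode varies between CPython versions.) On 
2.x the inner def statement boils down to a LOAD_CONST of the 
already-built code object for inner, a MAKE_FUNCTION and a STORE_FAST, 
followed by LOAD_FAST and CALL_FUNCTION for the return inner() line. No 
parsing or compiling happens when outer is called.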



> Every time you go through the outer
> function, you are creating new function objects for all of the inner
> functions. That's how you can get lexical scoping. It is not equivalent
> to defining the functions all at the top-level, where all of the
> function objects are created at once. The compiler can't optimize that
> overhead away because the overhead arises from implementing a real
> feature.

But the time it takes to parse the inner function and compile it to a 
code object is only paid once, when outer itself is compiled, not on 
every call:

>>> outer.func_code.co_consts
(None, <code object inner at 0xb7e80650, file "<stdin>", line 2>)
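
And each time outer runs, that one constant code object gets wrapped in 
a fresh function object, which is exactly what buys you lexical scoping. 
A small sketch (make_adder is just a made-up name, and this assumes 
CPython 2.x for the func_code attribute):

>>> def make_adder(n):
...     def add(x):
...             return x + n
...     return add
...
>>> add2, add3 = make_adder(2), make_adder(3)
>>> add2 is add3
False
>>> add2.func_code is add3.func_code
True
>>> add2(10), add3(10)
(12, 13)

Two function objects, one shared code object: the per-call cost is just 
wrapping that constant in a new function with a new cell for n.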

Which makes me wonder: is there anything we can do with that code object 
from Python code? I can disassemble it:

>>> import dis
>>> dis.dis(outer.func_code.co_consts[1])
  3           0 LOAD_CONST               0 (None)
              3 RETURN_VALUE
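
One thing that does seem to work, at least on 2.x, is building a 
function object from it by hand with types.FunctionType, which is more 
or less what MAKE_FUNCTION does for you (take this as a sketch rather 
than a recommendation):

>>> import types
>>> inner_code = outer.func_code.co_consts[1]
>>> rebuilt = types.FunctionType(inner_code, globals(), 'inner')
>>> rebuilt() is None
True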

Anything else?



-- 
Steven


