> Due to stack pushes and pops? Or is a dictionary established for the
> parameters each time a function is invoked? Or is it something else?
> (This is useful practical information.)
First there is name lookup overhead, which varies with how many
attributes need to be looked up ( thing.meth is two lookups ) and
how deep the search goes before success. ( Instances may have to
look in their own dict, their class's dict, and their class's
ancestors' dicts. )
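One practical consequence, sketched in modern Python ( the class
and method names here are made up for illustration ): hoisting the
attribute lookup out of a loop pays those dict lookups once rather
than on every iteration.

```python
class Thing:
    def meth(self):
        return 1

thing = Thing()

def repeated_lookup(n):
    # thing.meth is resolved on every pass: the instance dict is
    # searched first, then the class dict.
    total = 0
    for _ in range(n):
        total += thing.meth()
    return total

def hoisted_lookup(n):
    # Bind the bound method to a local once; the loop then pays
    # only a fast local-variable access per call.
    meth = thing.meth
    total = 0
    for _ in range(n):
        total += meth()
    return total
```

Both functions compute the same result; the second simply does the
name lookup once.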
Then, there is additional overhead in call_object to check if it
is a function, a method, or a builtin-function.
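You can see the distinction call_object has to make from Python
level ( a sketch using today's types module, not the internals the
post describes ):

```python
import types

def pyfunc():
    return "python"

# A builtin like len is implemented in C and called directly;
# a Python function is bytecode that must go through the evaluator.
print(isinstance(len, types.BuiltinFunctionType))
print(isinstance(pyfunc, types.FunctionType))
```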
If it is a function or method written IN Python ( and this is the
BIG bite of overhead that Guido was probably referring to ) eval_code
is called with the code object and the global & local dicts.
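What the evaluator gets handed is visible from Python level too
( hedged: these attribute names are current CPython introspection,
not the C-level interface the post describes ):

```python
def add(a, b):
    return a + b

# A Python function object bundles a code object with the globals
# dict it was defined in; a call hands both to the evaluator,
# along with a fresh local namespace for the parameters.
print(add.__code__.co_varnames)      # the local names, ('a', 'b')
print(add.__code__.co_argcount)      # 2
print(add.__globals__ is globals())  # defined at module level
```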
Once a function object is called, it must build a frameobject, which
in Python is a bit more complex than just allocating space on the
stack. Frames are heap allocated objects, like everything else in
Python ( which is what inspired the whole coroutine experiment! ).
Stack frames and integer objects have a special fast allocation,
but once allocated, the slots must be filled in and initialized,
( which means that reference counters may be incremented )
and the argument tuple unpacked.
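That frames are heap objects like any other can be demonstrated
from Python level ( a sketch; sys._getframe is a modern CPython
accessor, and the function name here is made up ):

```python
import sys

def grab_frame():
    # Each call builds a frame object on the heap; because it is
    # an ordinary reference-counted object, it can outlive the
    # call that created it.
    return sys._getframe()

frame = grab_frame()
print(frame.f_code.co_name)  # the frame still knows its code object
```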
Look at ceval.c if you want to see the details.
- Steve Majewski (804-982-0831) <sdm7g@Virginia.EDU>
- UVA Department of Molecular Physiology and Biological Physics