Profiling the interpreter from within

gb at cs.unc.edu
Fri Oct 5 08:53:11 EDT 2001


Courageous <jkraska at san.rr.com> writes:

> 
> While this isn't a bad idea, the interpreter folks don't really need
> to see it profiled. I profiled it in _Quantify_ a while back and
> decided that there was little that could be done in terms of
> obvious optimizations to the Python interpreter code.

I wouldn't use this imagined profiling capability to tune the
interpreter but rather to evaluate my use of it and the performance of
my extensions. For example, I could imagine certain programs spending
huge amounts of time in malloc and free if the program created and
destroyed many small objects. This might inspire me to modify the
memory allocation in Numeric (or whatever) to cache small blocks
instead of freeing them. I have seen large performance improvements in
other matrix packages by applying this simple optimization.
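To make the idea concrete, here is a minimal sketch in C of the kind of small-block cache I mean: instead of handing short-lived blocks back to free(), keep them on a per-size free list and reuse them on the next request. The names (cached_malloc, cached_free) and the size and depth limits are purely illustrative, not code from Numeric or any other package.

/* Illustrative small-block cache: reuse freed blocks instead of
 * returning them to the system allocator.  Limits are arbitrary. */
#include <stdlib.h>

#define MAX_CACHED_SIZE 256     /* only cache blocks this small       */
#define CACHE_DEPTH      64     /* keep at most this many per size    */

typedef struct block { struct block *next; } block;

static block *free_lists[MAX_CACHED_SIZE + 1];
static int    free_counts[MAX_CACHED_SIZE + 1];

void *cached_malloc(size_t n)
{
    if (n >= sizeof(block) && n <= MAX_CACHED_SIZE && free_lists[n]) {
        block *b = free_lists[n];          /* reuse a cached block */
        free_lists[n] = b->next;
        free_counts[n]--;
        return b;
    }
    return malloc(n);                      /* fall back to the real allocator */
}

void cached_free(void *p, size_t n)
{
    if (p && n >= sizeof(block) && n <= MAX_CACHED_SIZE
          && free_counts[n] < CACHE_DEPTH) {
        block *b = (block *)p;             /* stash it instead of freeing */
        b->next = free_lists[n];
        free_lists[n] = b;
        free_counts[n]++;
        return;
    }
    free(p);
}

Note that the caller has to pass the block size to cached_free; in a matrix package that is rarely a burden, since the dimensions of the object being destroyed are known at that point.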

But... I'd like to have an idea of how much there is to be gained
before going to the trouble. Thus the desire for an easy-to-use
profiler that sees everything.
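As a crude first estimate, short of real profiling, one could at least time raw malloc/free round trips for a typical small block size. The loop count and block size below are arbitrary stand-ins; the number you get is only a ballpark for allocator overhead, not for any particular program's or extension's behaviour.

/* Rough micro-benchmark of malloc/free overhead for small blocks. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    enum { N = 1000000, SIZE = 64 };   /* arbitrary: 1M round trips, 64-byte blocks */
    int i;
    clock_t start = clock();
    for (i = 0; i < N; i++) {
        void *p = malloc(SIZE);        /* allocate and immediately release */
        free(p);
    }
    {
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("%d malloc/free pairs of %d bytes: %.3f s (%.1f ns each)\n",
               N, SIZE, secs, secs * 1e9 / N);
    }
    return 0;
}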

gb


