I profiled Python (using Unix gprof) and found that tons of time was
spent in a C function called finddfa(). A little checking revealed
that this is a slow function that is mostly called when compiling
Python code. I *thought* that all my code was being compiled *once*,
and was very surprised by this time sink. A bit of looking around on
my Unix box revealed that the permissions in my /usr/local/lib/python
directory for a number of *.pyc files were incorrectly set. The
result was that for these files, a replacement *.pyc file could *not*
be written, and the actual *.py file was being compiled time and
time again.

As a short-term fix I made the *.pyc files in this central directory
world-writable, and the startup time of my Python code dropped
dramatically (I shaved .5 seconds of wall time off each of my Python
processes). Leaving these files world-writable, however, opens a
Trojan horse security loophole.
I *think* that the correct long-term fix is to have *only* the *.pyc
files available in this central directory. This will guarantee that
pre-compiled Python code is used, and will close the Trojan horse
door. It is useful to have the source code for these common modules
available for perusal, but placing them in the ../lib/python directory
seems to be too problematic.
In summary, "proper installation" can *greatly* improve Python
performance.
p.s., If non-compiled performance enhancement is desired, someone
might want to look at the finddfa() function. It gets called an awful
lot, and it does a *linear* search in a list. A faster search
algorithm (using hashing, skip lists, various trees, etc) would be
most beneficial (at least on our SparcServer 1000).
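In Python terms, the suggested change amounts to the following: pay a one-time cost to build a hash table over the same entries, then replace each linear scan with a single lookup. The function names and the (key, value) pair layout here are illustrative only, not the actual layout finddfa() searches:

```python
def find_linear(table, key):
    """O(n): scan a list of (key, value) pairs, as a linear search does."""
    for k, v in table:
        if k == key:
            return v
    return None

def build_index(table):
    """One-time O(n) pass that builds a hash table from the same pairs."""
    return dict(table)

def find_hashed(index, key):
    """O(1) expected: one hash lookup replaces the whole scan."""
    return index.get(key)
```

Since finddfa() is called many times against a table that rarely changes, the one-time indexing cost is amortized away almost immediately.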
p.p.s., Profiling is wonderful :-).