Lookup caching

Andrea Griffini agriff at tin.it
Sat Dec 9 21:04:10 EST 2006


Hello,

I implemented that crazy idea and it seems to be working... in its
current hacked state it can still pass the test suite (excluding
the tests that don't like self-generated output on stdout
from python), and the stats after the quicktest are IMO
impressive:

LOAD_GLOBAL = 13666473
globals miss = 58988
builtins = 8246184
builtins miss = 32001

LOAD_GLOBAL is the total number of times the pseudocode
instruction was executed.

globals miss is the number of times the actual lookup
on globals had to be performed. Note that if the lookup
wasn't done because the name was known to be absent
from globals, it's still considered a cache hit, not a miss.

builtins is the total number of times the builtins dict
had to be searched (because the name was not in globals).

builtins miss is the number of real builtins searches
that were performed (in the other cases the lookup cache for
builtins found the answer - either positive or negative).
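The post doesn't show the cache itself (the real patch presumably lives at
the C level in the interpreter loop), so here is a minimal Python sketch of
the idea as I read it: each name gets a cache entry tagged with a version
counter of the globals and builtins dicts, and a "not found in globals"
answer is cached too, so it counts as a hit. All the names below
(CachingNamespace, g_version, and so on) are made up for illustration.

```python
_MISSING = object()  # sentinel so "not found" can be cached too (negative caching)

class CachingNamespace:
    """Toy model of the LOAD_GLOBAL lookup cache described above (not
    the actual patch): entries are tagged with a dict 'version' and any
    write bumps the version, invalidating stale entries."""

    def __init__(self, globals_dict, builtins_dict):
        self.globals = globals_dict
        self.builtins = builtins_dict
        self.g_version = 0   # bumped on every write to globals
        self.b_version = 0   # bumped on every write to builtins
        self.cache = {}      # name -> (g_ver, b_ver, in_globals, value)
        self.stats = dict(load_global=0, globals_miss=0,
                          builtins=0, builtins_miss=0)

    def set_global(self, name, value):
        self.globals[name] = value
        self.g_version += 1  # invalidates all cached entries

    def load_global(self, name):
        self.stats["load_global"] += 1
        entry = self.cache.get(name)
        if entry and entry[0] == self.g_version and entry[1] == self.b_version:
            _, _, in_globals, value = entry       # cache hit (maybe negative)
            if not in_globals:
                self.stats["builtins"] += 1       # builtins answer served from cache
        else:
            self.stats["globals_miss"] += 1       # real lookup on globals
            value = self.globals.get(name, _MISSING)
            in_globals = value is not _MISSING
            if not in_globals:
                self.stats["builtins"] += 1
                self.stats["builtins_miss"] += 1  # real lookup on builtins
                value = self.builtins.get(name, _MISSING)
            self.cache[name] = (self.g_version, self.b_version, in_globals, value)
        if value is _MISSING:
            raise NameError(name)
        return value

ns = CachingNamespace({"x": 1}, {"len": len})
ns.load_global("x"); ns.load_global("x")      # second call is a pure cache hit
ns.load_global("len"); ns.load_global("len")  # second is a cached builtins answer
```

After those four calls the counters behave like the stats above: four
LOAD_GLOBALs, two real globals lookups, two builtins answers of which
only one required a real builtins search.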

To me it seems a promising idea; I've still to clean up
the code and to make serious speed tests (the "make test"
itself seems a lot more "fluid", but it could be just
self-hypnotization :-D ).

The LOAD_GLOBAL code is actually simpler because I
resorted to a regular non-inlined lookup in case of
a cache miss. There's no reason to do that, however...

Also, the same approach could be used for other lookups
that get their name from co->co_names.


Andrea
