[Python-Dev] Re: opcode performance measurements

Jeff Epler jepler@unpythonic.dhs.org
Thu, 31 Jan 2002 13:48:03 -0600


On Thu, Jan 31, 2002 at 12:37:16PM -0600, Skip Montanaro wrote:
> Interesting results.  I've been working on my {TRACK,UNTRACK}_GLOBAL opcode
> implementations.  I have an optimizer filter that sets up tracking for all
> LOAD_GLOBAL,{LOAD_ATTR}* combinations.  It's still not quite working and
> will only be a proof of concept by devday if I do get it working, but I
> expect most of these expensive opcode combinations to collapse into a
> LOAD_FAST, with the addition of a TRACK_GLOBAL/UNTRACK_GLOBAL pair executed
> at function start and end, respectively.
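
For concreteness, here's my reading of the effect you're describing (a
sketch in source terms, not your actual patch):

    import math

    def norm(xs):
        total = 0.0
        for x in xs:
            total += math.sqrt(x)    # LOAD_GLOBAL math; LOAD_ATTR sqrt, every pass
        return total

    # would behave roughly as if it had been written
    def norm_tracked(xs):
        _math_sqrt = math.sqrt       # TRACK_GLOBAL at entry caches the binding
        total = 0.0
        for x in xs:
            total += _math_sqrt(x)   # LOAD_FAST inside the loop
        return total                 # UNTRACK_GLOBAL at exit releases it
    # (hand-waving away whatever keeps the cached slot in sync if someone
    # rebinds math or math.sqrt while norm_tracked is running)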

Won't there be code that this slows down?  For instance, the code
generated by
    print "f = lambda: 0"
    print "def g():"
    print "\tif f():"  # prevent optimization of 'if 0:'
    print "\t\tx = []"
    for i in range(10000):
	print "\t\tx.append(global_%d)" % i
    print "\t\treturn x"
    print "\treturn []"
(10001 TRACK_GLOBALs in the prologue, but only one LOAD_GLOBAL ever executed)
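
Spelled out, the module that snippet prints looks like this (trimmed):

    f = lambda: 0
    def g():
        if f():                      # f() is 0, so this branch never runs
            x = []
            x.append(global_0)
            x.append(global_1)
            # ... 9,998 more appends ...
            x.append(global_9999)
            return x
        return []

so the prologue would pay for tracking f and global_0 .. global_9999 on
every call, while the branch that actually uses them is dead.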

Not to mention, will it even work?  TRACK_GLOBAL will have to take special
note of globals that don't exist yet when the function prologue is executed,
and either not execute the subsequent load as a LOAD_FAST, or else store a
special value that produces the same NameError "global name
'global_666' is not defined" message, not an UnboundLocalError...
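
Today's behaviour, which any LOAD_FAST rewrite has to reproduce (global_666
is just a stand-in for a global that doesn't exist at function-entry time):

    def h():
        return global_666        # compiled as a plain LOAD_GLOBAL today

    try:
        h()                      # NameError: global name 'global_666' is not defined
    except NameError:
        pass                     # crucially, not an UnboundLocalError

    global_666 = 2
    assert h() == 2              # the global may legally appear after h() is defined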

Getting the NameError right sounds easy enough, but how do you make sure
that this optimization is never a pessimization (aside from sending
programmers such as myself to the retraining camps of the PSU)?

Jeff
PS Hey, that's remarkable .. usually people get unexpectedly cut off
when they try to mentio