Bytecode optimisation

Corran Webster cwebster at math.tamu.edu
Tue May 18 22:09:05 EDT 1999


In article <37420CBD.FF15564 at appliedbiometrics.com>,
Christian Tismer  <tismer at appliedbiometrics.com> wrote:
>
>Corran Webster wrote:
>> 
>> Or as I mentioned in my reply to Mark-Andre Lemburg, there may be a case
>> for an optimisation level which assumes sanity for many standard things,
>> so that
>> 
>> for x in range(1000):
>>   for y in range(1000):
>>     do_stuff()
>> 
>> could assume that do_stuff() doesn't change the meaning of range and so
>> this could be optimised to:
>> 
>> r = range(1000)
>> for x in r:
>>   for y in r:
>>     do_stuff()
>
>Looks promising at first glance.  But you quickly realize
>that this is no real optimization, since you will do it
>by hand anyway, or use xrange.
>And what does it help to check whether something is an
>xrange on every thousandth run through the loop?

Hmmm... you know how to hand optimise this, and so do I, but it takes
a while for newcomers to the language to learn the tricks.  Even a
trick like the example given will probably only shave a few percent off
the time, but it seems to me that there might be a niche for an optimiser
which automates some of the hand optimisation that more experienced
people do as a matter of course, and which doesn't require patching
the Python source.
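
To make that concrete, here is a rough sketch of two of the standard
tricks (slow/fast are just illustrative names, and do_stuff stands in
for whatever the loop body really does):

def do_stuff():
  # stand-in for the real loop body
  pass

# Unoptimised: the inner range(1000) is rebuilt on every pass of the
# outer loop, and do_stuff is looked up as a global on every call.
def slow():
  for x in range(1000):
    for y in range(1000):
      do_stuff()

# Hand-optimised: hoist the range out of the loops and bind the
# global to a local name before entering them.
def fast():
  r = range(1000)
  f = do_stuff
  for x in r:
    for y in r:
      f()

The local binding wins because a local variable lookup is much cheaper
than a global dictionary lookup, and it is exactly this sort of
mechanical rewrite that an optimiser could do automatically.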

And maybe it'll turn out that it's not worth the effort.

I'm really doing this for fun and as a project so I can learn a bit more
about Python internals.  If it bears fruit, great!  If not, well, I'll
have learned a fair bit in the process.

>Guys, you are looking at the wrong end. This is another complete
>waste of time. The whole interpreter overhead averages only
>about 30 percent when everything else is C code. I'd be surprised
>if bytecode optimization gave more than 15 percent on average,
>when semantics are not changed.
>
>Do little enhancements with moderate effort.
>And check the optimization results from last year's
>conference: overall pretty good work, but discouraging
>results.
>
>You need to invent special new bytecodes for the things
>you want to optimize. If you can find and implement these, 
>then it makes sense to optimize with type inference.

Point taken.

At the moment I'm exploring and playing.  Maybe it'll all be for nothing,
but I'm really just discovering what's possible with Python in its
present form.
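
For anyone who wants to poke at this themselves, the standard dis
module shows exactly what the interpreter executes, and hence what an
optimiser would have to rewrite.  A minimal peek:

import dis

def loop():
  for x in range(1000):
    pass

# Print a disassembly of the function's bytecode: the loop setup,
# the call to range, and the instructions run on every iteration.
dis.dis(loop)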

>no-longer-optimizing-for-less-than-50-percent-ly y'rs - chris

will-be-happy-if-the-optimiser-takes-twice-as-long-to-run-as-the-time-
-it-saves-ly y'rs,
Corran




