Bytecode optimisation

Michael Hudson mwh21 at cam.ac.uk
Wed May 19 04:24:15 EDT 1999


Christian Tismer <tismer at appliedbiometrics.com> writes:
> Guys, you are at the wrong end. This is another complete waste
> of time. The whole interpreter overhead is bound to an average
> of about 30 percent, when everything is C-code. I'd wonder if
> bytecode optimization would give more than 15 percent on
> average, when semantics are not changed.

Believe me, I know this - that's what my original "if-you-get-better-
than-twenty-percent-speed-up-I'll-be-impressed" comment was driving
at, albeit in an obscure fashion...

> Do little enhancements with moderate effort.
> And check the optimization results of last year's
> conference. It's overall pretty good work but discouraging
> results.
> 
> You need to invent special new bytecodes for the things
> you want to optimize. If you can find and implement these, 
> then it makes sense to optimize with type inference.

What would really really speed up Python (at least for the things that
I feel the need to optimise) is unboxing primitive datatypes.

As someone has pointed out on this newsgroup (probably some tim or
other), executing (for integer i)

i = i + 1

spends far more processing time twiddling refcounts than it does
adding 1 to the value of i.
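
To see roughly where that time goes, here is a toy C sketch (nothing
to do with CPython's actual ceval.c or integer objects; the IntObject
struct and the helper functions are invented for illustration) that
runs the same increment a million times first boxed, then unboxed:

#include <stdio.h>
#include <stdlib.h>

/* Invented stand-in for a boxed, reference-counted integer object. */
typedef struct {
    long refcount;
    long value;
} IntObject;

static IntObject *new_int(long value)
{
    IntObject *obj = malloc(sizeof(IntObject));   /* heap allocation */
    obj->refcount = 1;
    obj->value = value;
    return obj;
}

static void decref(IntObject *obj)
{
    if (--obj->refcount == 0)
        free(obj);                                /* deallocation */
}

int main(void)
{
    IntObject *i = new_int(0);
    long j = 0;
    int n;

    /* Boxed: every "i = i + 1" allocates a fresh result object and
       adjusts refcounts; the actual addition is buried in the middle. */
    for (n = 0; n < 1000000; n++) {
        IntObject *result = new_int(i->value + 1);
        decref(i);          /* old value of i is no longer referenced */
        i = result;
    }
    printf("boxed:   %ld\n", i->value);
    decref(i);

    /* Unboxed: the same arithmetic on a raw long is just the add. */
    for (n = 0; n < 1000000; n++)
        j = j + 1;
    printf("unboxed: %ld\n", j);

    return 0;
}

The boxed loop pays for an allocation, a deallocation and the
refcount bookkeeping on every trip round; the unboxed loop is one add
per trip.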

To do this probably requires some kind of static typing, and that's
ground that's been trod before.

You'd probably need opcodes like BINARY_ADD_INTEGERS,
BINARY_SUB_FLOATS, ... and since each opcode is a single byte you'd
run out of the 256 available values pretty quickly, methinks.
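
Just as a hypothetical illustration of what such a specialised opcode
could buy, here is a toy stack machine (the opcode names and the
little VM are invented; this is not CPython's eval loop) in which
BINARY_ADD_INTEGERS works on unboxed stack slots:

#include <stdio.h>

/* Invented opcodes for a toy VM; none of these exist in CPython. */
enum {
    LOAD_CONST_INT,        /* push an unboxed integer from the constant table */
    BINARY_ADD_INTEGERS,   /* the hypothetical type-specialised add */
    PRINT_TOP,
    STOP
};

int main(void)
{
    /* "Compiled program": push 2, push 3, add, print, stop. */
    int code[] = { LOAD_CONST_INT, 0,
                   LOAD_CONST_INT, 1,
                   BINARY_ADD_INTEGERS,
                   PRINT_TOP,
                   STOP };
    long consts[] = { 2, 3 };
    long stack[64];
    int sp = 0;            /* value stack pointer */
    int pc = 0;            /* program counter */

    for (;;) {
        switch (code[pc++]) {
        case LOAD_CONST_INT:
            stack[sp++] = consts[code[pc++]];
            break;
        case BINARY_ADD_INTEGERS:
            /* The compiler would only emit this where type inference
               proved both operands are plain integers, so there is no
               type dispatch, no allocation and no refcounting here. */
            sp--;
            stack[sp - 1] = stack[sp - 1] + stack[sp];
            break;
        case PRINT_TOP:
            printf("%ld\n", stack[sp - 1]);
            break;
        case STOP:
            return 0;
        }
    }
}

Of course the win only shows up where the compiler can prove the
operand types in advance, which is exactly the static typing / type
inference ground mentioned above, and every type-operator pair you
specialise eats another opcode.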

This is most certainly not a trivial issue.

> no-longer-optimizing-for-less-than-50-percent-ly y'rs - chris

Fair enough.

Michael



