pointless musings on performance

Paul Boddie paul at boddie.org.uk
Thu Nov 26 06:12:18 EST 2009


On 25 Nov, 13:11, Antoine Pitrou <solip... at pitrou.net> wrote:
>
> When you say "executing each kind of bytecode instruction", are you
> talking about the overhead of bytecode dispatch and operand gathering, or
> the total cost including doing the useful work?

Strip away any overhead (dispatch, operand gathering) and measure only
the cumulative time spent doing the actual work for each kind of
instruction, then calculate the average "cost" by dividing that total
by the number of times each instruction type was executed. So, for a
whole program you'd get a table of results like this:

LOAD_CONST <total time> <frequency> <time per instruction>
LOAD_NAME <total time> <frequency> <time per instruction>
CALL_FUNCTION <total time> <frequency> <time per instruction>
...

A comparison of the "time per instruction" column would then yield the
relative cost of each kind of instruction. Of course, general
profiling of the interpreter would be useful, too, but I imagine that
has been done many times before. To go back to the CISC vs. RISC
analogy, I'd expect substantial variation in relative costs, which one
could argue is a CISC-like trait (although that's a separate matter
from instruction set orthogonality).

Paul


