Python Compiler

Josh Tompkins josht at iname.com
Mon May 1 18:43:30 EDT 2000


[snip]
>
>There are two main approaches to compile Python to machine code:
>
>A) Work through the bytecode as the interpreter does, compiling each
>bytecode instruction to the library function that the interpreter would
>call. For example, BINARY_ADD would become a PyObject_Add() call.
>
>JPython uses a similar technique to compile to Java bytecode. However,
>because all variables are completely polymorphic (i.e., nothing is known
>about their type), even the simplest operations end up going through the
>abstraction mechanism. So BINARY_ADD, for example, might still have
>to go through and allocate a new integer object, deallocate the old ones,
>etc., even if a simple machine "ADD 1 TO REGISTER" instruction would work.
>
>The result is that the program is in machine code, but it still runs like
>it's in an interpreter. Cutting out the fetch-decode-dispatch sequence is
>really only the tip of the iceberg.
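
Just to check that I understand approach A: the compiler would turn each
bytecode into the corresponding C library call, so something like "a + b"
would come out roughly as the sketch below. (This is only my own guess at
it, not code from the post above; in CPython the call behind BINARY_ADD is
PyNumber_Add(), and the name compiled_add is made up.)

    /* Rough sketch of what approach A might emit for the Python
     * expression "a + b" (LOAD_FAST a; LOAD_FAST b; BINARY_ADD).
     * Illustration only -- not output from any real compiler. */
    #include <Python.h>

    static PyObject *
    compiled_add(PyObject *a, PyObject *b)
    {
        /* The dispatch loop is gone, but the add still goes through the
         * generic abstraction layer: type lookup, reference counting,
         * and allocation of a brand-new result object, even when a and b
         * are small integers that one machine ADD could handle. */
        return PyNumber_Add(a, b);   /* this is what BINARY_ADD does */
    }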

Would this type of compiler result in any real speed increase? Even though
the compiled program still runs as if it were interpreted, does translating
it to pure machine code (and dropping the fetch-decode-dispatch loop) speed
things up at all?
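
For what it's worth, the mental picture I have of the loop being cut out is
something like the toy below (a deliberately simplified sketch, nothing like
the real ceval.c); approach A would essentially delete the for/switch and
emit the case bodies in order:

    /* Toy interpreter, just to illustrate the fetch-decode-dispatch
     * overhead in question -- not CPython's actual eval loop. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code)
    {
        int stack[16];
        int sp = 0;

        for (;;) {
            switch (*code++) {               /* fetch + decode + dispatch */
            case OP_PUSH:
                stack[sp++] = *code++;       /* operand follows the opcode */
                break;
            case OP_ADD:
                sp--;
                stack[sp - 1] += stack[sp];  /* the actual work: one add */
                break;
            case OP_PRINT:
                printf("%d\n", stack[--sp]);
                break;
            case OP_HALT:
                return;
            }
        }
    }

    int main(void)
    {
        /* "print 1 + 2" as toy bytecode; prints 3. */
        const int prog[] = { OP_PUSH, 1, OP_PUSH, 2, OP_ADD, OP_PRINT, OP_HALT };
        run(prog);
        return 0;
    }

Compiling that the way approach A does would remove the loop and switch but
keep exactly the same per-operation work, which is why I'm wondering how
much it actually buys.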

[snip]

Thanks for the replies, guys.

Josue
________________________________________________________________

"Destined For Great Things -- but pacing myself."
- From a t-shirt.

E-Mail:  josht at crosswinds.net
ICQ:  21219667
AIM:  JosueTheGreat
Web:  http://www.crosswinds.net/~josht
_________________________________________________________________


