interpreter vs. compiled

castironpi castironpi at gmail.com
Tue Aug 5 13:12:22 EDT 2008


On Aug 5, 9:21 am, "paulo.jpi... at gmail.com" <paulo.jpi... at gmail.com>
wrote:
> On Aug 3, 1:26 am, castironpi <castiro... at gmail.com> wrote:
>
>
> > Which is 12 bytes long and runs in a millisecond.  What it does is set
> > a memory address to successive integers 0..9, then yields.  Due to the
> > nature of program flow control, its first steps run on any x86
> > machine, but the yield only succeeds on Windows 98+; on anything else
> > it crashes the machine or otherwise loses control.  (That part depends
> > on those OSes.)
>
> > I can try something similar dynamically.
>
> > char* mem = malloc( 48 );
> > setpermission( mem, EXECUTE );  /* fictional: think mprotect/VirtualProtect */
> > memcpy( mem + 0, "\x09\x0f\x00\x00", 4 );
> > for( int x = 0; x < 10; ++x ) {
> >     memcpy( mem + 4 * ( x + 1 ), "\x04\xA0\x00\x00", 4 );
> >     mem[ 4 * ( x + 1 ) + 3 ] = (char) x;
> > }
> > memcpy( mem + 44, "\x01\x20\x00\x01", 4 );
> > ((void (*)( void )) mem)();     /* jump to it; it jumps back */
>
> > Which, with some imagination, produces the contents of 'abinary.exe'
> > above (one difference: the last word) in a memory block at address
> > 'mem', then jumps to it; the code jumps back, and the program exits.
> > </fiction>
>
> > I'll compare a C compilation to the first example, 'abinary.exe', and
> > a JIT compilation to the second example, 'char* mem'.  If the
> > comparison isn't accurate, say how, because these are places I can
> > start from... (that is, instead of just repeating the claims).
>
> > When does a JIT do this, and what does it do in the meantime?
>
The JIT works like an assembler/linker that writes to memory. It will
> load the file(s) containing the bytecode and generate the required
> assembly instructions in memory.
>
If there are dependencies on other modules, they will be loaded and
> compiled as well. Then the linker makes sure that cross-references
> between modules, like memory addresses and branch targets, are
> correct.

So far this is the same as any compilation, except that the first half
is already done and the output goes to memory instead of a file,
neither of which is a bottleneck.
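
For what it's worth, the non-fictional version of "an assembler that
writes to memory" is short.  A minimal sketch, assuming x86-64 and a
POSIX system with mmap; the six bytes just encode "mov eax, 42; ret"
rather than the loop in the fiction above:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main( void )
{
    /* x86-64 machine code for: mov eax, 42 ; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* ask the OS for a block we may write to and execute */
    void* mem = mmap( NULL, sizeof code,
                      PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0 );
    if( mem == MAP_FAILED )
        return 1;

    memcpy( mem, code, sizeof code );           /* "assemble" to memory */

    int (*fn)( void ) = (int (*)( void )) mem;  /* jump to it */
    printf( "%d\n", fn() );                     /* prints 42; control came back */

    munmap( mem, sizeof code );
    return 0;
}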

A clever JIT might add instrumentation points so that it can rewrite
> the code using profile-guided optimizations; this means generating
> optimized code using the program's observed behaviour as input.
>
> This usually makes JIT code faster than normally compiled code.

Here you need an example.  You are suggesting that a compiler can make
better optimizations if it knows which functions will carry which
loads, run how many times, etc., and that it can use profile
statistics as a partial indicator to do that.
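
To make concrete the kind of example I am after, here is a
hypothetical sketch; the counter, the threshold, and the "specialized"
function are all invented for illustration.  A call site counts its
own executions and, once hot, switches to a version specialized for
the argument value the profile observed, which stands in for what a
recompiling JIT does with real machine code:

#include <stdio.h>

#define HOT_THRESHOLD 1000            /* invented cutoff */

static long calls = 0;                /* instrumentation counter */

/* generic version: handles any b */
static int add_generic( int a, int b ) { return a + b; }

/* version a JIT might emit after profiling shows b is always 1 here */
static int add_specialized_b1( int a ) { return a + 1; }

int main( void )
{
    int total = 0;
    for( int i = 0; i < 1000000; ++i ) {
        if( ++calls < HOT_THRESHOLD )
            total = add_generic( total, 1 );      /* profiled slow path */
        else
            total = add_specialized_b1( total );  /* "recompiled" fast path */
    }
    printf( "%d\n", total );
    return 0;
}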

Although normal native code is able to start executing sooner, it
> only targets a specific set of processors.
>
> JIT code is independent of the processor, and a good JIT
> implementation is able to exploit the processor better than a direct
> native compiler. There is, however, a time penalty at program
> startup.

Once again, you are asserting that knowing what the program has done
so far, say in the first 5 seconds (or .5), can improve performance.
In this case it can make a better choice of which instructions to use
on the CPU.  I need an example.
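
Something like this, perhaps.  A hypothetical sketch of run-time
instruction selection using the GCC-specific __builtin_cpu_supports();
a static compiler has to commit to a baseline instruction set at build
time, while a JIT can ask the CPU it is actually running on:

#include <stdio.h>
#ifdef __SSE__
#include <xmmintrin.h>
#endif

/* portable baseline: add two 4-float vectors one element at a time */
static void add4_scalar( const float* a, const float* b, float* out )
{
    for( int i = 0; i < 4; ++i )
        out[ i ] = a[ i ] + b[ i ];
}

#ifdef __SSE__
/* SSE version: one 128-bit add instruction instead of four */
static void add4_sse( const float* a, const float* b, float* out )
{
    _mm_storeu_ps( out, _mm_add_ps( _mm_loadu_ps( a ),
                                    _mm_loadu_ps( b ) ) );
}
#endif

int main( void )
{
    float a[ 4 ] = { 1, 2, 3, 4 }, b[ 4 ] = { 5, 6, 7, 8 }, out[ 4 ];
    void (*add4)( const float*, const float*, float* ) = add4_scalar;

#ifdef __SSE__
    if( __builtin_cpu_supports( "sse" ) )   /* decide at run time */
        add4 = add4_sse;
#endif

    add4( a, b, out );
    printf( "%g %g %g %g\n", out[ 0 ], out[ 1 ], out[ 2 ], out[ 3 ] );
    return 0;
}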


