Writing an emulator in python - implementation questions (for performance)

Santiago Romero sromero at gmail.com
Fri Nov 13 06:20:41 EST 2009


 I'm going to quote all the answers in a single post, if you all don't
mind:

> [greg]
> But keep in mind that named "constants" at the module level
> are really global variables, and therefore incur a dictionary
> lookup every time they're used.
>
> For maximum speed, nothing beats writing the numeric literals
> directly into the code, unfortunately.

 So, in the end, there are no real compile-time constants available in
Python? (Nothing like:)

#define  R1   1
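
 From what I've read, the usual workaround seems to be binding such
values to local names (or default arguments) inside the hot function,
since local lookups are much cheaper than global dictionary lookups in
CPython. A rough sketch of what I mean (hypothetical names):

==================================
R1 = 1                           # module level: really a global variable

def step_slow(regs):
    return regs[R1]              # global dict lookup of R1 on every call

def step_fast(regs, R1=R1):      # bind the "constant" as a default argument
    return regs[R1]              # now it's a fast local lookup

def run(regs, count):
    r1 = R1                      # or: one global lookup before the loop...
    total = 0
    for _ in range(count):
        total += regs[r1]        # ...then only local lookups inside it
    return total
==================================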


> Generally, I think you're going to have quite a battle on
> your hands to get a pure Python implementation to run as
> fast as a real Z80, if it's even possible at all.

 Well... I'm trying to emulate the basic Z80, clocked at 3.5 MHz...
I hope Python on a 2 GHz computer can emulate a 3.5 MHz machine...
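
 Doing some very rough numbers (assuming the shortest Z80 instruction,
a NOP, takes 4 T-states):

==================================
Z80_CLOCK   = 3500000                          # 3.5 MHz
MIN_TSTATES = 4                                # a NOP takes 4 T-states
max_ips     = Z80_CLOCK / float(MIN_TSTATES)   # ~875,000 instructions/second
host_cycles = 2000000000 / max_ips             # ~2,300 host cycles per emulated
                                               # instruction on a 2 GHz machine
==================================

 So the budget is on the order of a couple of thousand host cycles per
emulated instruction, and each Python bytecode step already eats a good
chunk of that.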

 In the end, if it's not possible... well, then at least I'll have
had some fun... :-)



> [Steven D'Aprano]
> The shift and mask are a little faster on my machine, but that's
> certainly what I would call a micro-optimization. Unless the divmod call
> is the bottleneck in your code -- and it almost certainly won't be --

 It can be a real bottleneck.

 An emulator executes machine code instructions continuously. Those
instructions are read from memory, and their operands are read from
memory too. Even the shortest opcode (00h -> NOP) needs one memory
read, and the CPU's whole job is to read from memory, decode and
execute, over and over.
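
 Roughly, the core loop looks something like this (just a sketch with
hypothetical names, not my real code):

==================================
class CPU(object):
    def __init__(self):
        self.pc = 0                      # program counter
        self.a = 0                       # accumulator

def run(cpu, memory, cycles):
    """Fetch / decode / execute until the T-state budget runs out."""
    while cycles > 0:
        opcode = memory[cpu.pc]          # fetch: at least one memory read
        cpu.pc = (cpu.pc + 1) & 0xFFFF
        if opcode == 0x00:               # 00h -> NOP
            cycles -= 4
        elif opcode == 0x3E:             # 3Eh -> LD A,n: one more memory read
            cpu.a = memory[cpu.pc]
            cpu.pc = (cpu.pc + 1) & 0xFFFF
            cycles -= 7
        else:
            raise NotImplementedError(hex(opcode))

# "memory" can simply be a 64K bytearray:
# run(CPU(), bytearray(65536), 70000)
==================================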

 My problem is that I would need to "encapsulate" memory reads and
memory writes in functions. In C I use #define so that:

 - No function call is made (the code gets inlined).
 - I can still "call" my macro, so I don't have to repeat the code by
   hand.
 - I only have to optimize my "#define" macro once, and every "call"
   is optimized with it.

 This way (in C) I can write readable code, and the preprocessor
replaces my "readable macros" with the final code.

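 In Python the closest things I can think of are: writing the
indexing expression inline by hand wherever speed matters, accepting a
helper function and its call overhead, or at least binding that helper
to a local name inside the hot loop. A rough sketch of what I mean
(hypothetical names, not my real code):

==================================
# Option 1: a readable helper, but it costs a Python function call
# per memory access.
def z80_read_mem(memory, address):
    return memory[address & 0xFFFF]

# Option 2: inline the expression by hand wherever speed matters
# (this is what #define gives me for free in C):
#     value = memory[address & 0xFFFF]

# Option 3: keep the helper, but bind it to a local name before the
# hot loop; this avoids the repeated global name lookup (though not
# the call itself).
def run(memory, addresses):
    read = z80_read_mem
    total = 0
    for address in addresses:
        total += read(memory, address)
    return total
==================================
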
> I don't think it's worth the obfuscation to use shift/mask.

 An idea.

 I think I'm going to write a preprocessing layer on top of my Python
program, to allow the use of macros in my emulator and to generate the
final .py program with a script.

 VERY SIMPLE EXAMPLE:

My program:

File emulator.pym:

==================================
#!/usr/bin/python
import blah
import sys

MACRO BEGIN Z80WriteMem( address, value )
  blablablah
  inc blah
  p = p + x
MACRO END

MACRO BEGIN Z80ReadMem( address )
  ( memory[address>>4][blah] )
MACRO END


(more code)

    pepe = @@@Z80ReadMem( reg_A )
    (more code)
    @@@Z80WriteMem( 0x121212, value )
==================================

 And then use a script to replace the @@@ macro calls with the
final code.

 This way I could write my emulator with "macros", and my
"preprocessor" would generate the final .py files for the "binary"
releases, while I keep the .pym files as the "real" source (since the
.py files would be generated from the .pym macro files).

 Can the above be easily done with an already-existing tool?
(For example, can m4 do this job?)
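
 Just to make the idea concrete, here's a very naive pure-Python sketch
of the kind of expander I have in mind (regex-based, hypothetical, no
error handling):

==================================
#!/usr/bin/python
# Naive .pym -> .py macro expander (sketch only; multi-line macro bodies
# keep their original indentation, so block macros only expand cleanly
# at the same indentation level as their definition).
import re
import sys

MACRO_BEGIN = re.compile(r'\s*MACRO BEGIN\s+(\w+)\s*\(([^)]*)\)')
MACRO_END   = re.compile(r'\s*MACRO END')
MACRO_CALL  = re.compile(r'@@@(\w+)\(([^)]*)\)')

def parse_macros(lines):
    """Collect every MACRO BEGIN name(params) ... MACRO END block."""
    macros, name, params, body = {}, None, [], []
    for line in lines:
        m = MACRO_BEGIN.match(line)
        if m:
            name = m.group(1)
            params = [p.strip() for p in m.group(2).split(',') if p.strip()]
            body = []
        elif MACRO_END.match(line) and name is not None:
            macros[name] = (params, body)
            name = None
        elif name is not None:
            body.append(line)
    return macros

def expand(line, macros):
    """Replace @@@Name(arg, ...) calls with the macro body text."""
    def repl(match):
        params, body = macros[match.group(1)]
        args = [a.strip() for a in match.group(2).split(',') if a.strip()]
        text = '\n'.join(body)
        for param, arg in zip(params, args):
            text = re.sub(r'\b%s\b' % re.escape(param), arg, text)
        return text
    return MACRO_CALL.sub(repl, line)

def main(src, dst):
    lines = open(src).read().splitlines()
    macros = parse_macros(lines)
    output, in_macro = [], False
    for line in lines:
        if MACRO_BEGIN.match(line):
            in_macro = True              # strip macro definitions from the output
        elif MACRO_END.match(line):
            in_macro = False
        elif not in_macro:
            output.append(expand(line, macros))
    open(dst, 'w').write('\n'.join(output) + '\n')

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])       # e.g. expander.py emulator.pym emulator.py
==================================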


