Python for microcontrollers

Evil Bastard spam at me.please
Mon Aug 8 16:58:30 EDT 2005


Hi all,

I'm currently tackling the problem of implementing a Python-to-assembler
compiler for PIC 18Fxxx microcontrollers, and thought I'd open it up
publicly for suggestions before I embed too many mistakes in the
implementation.

The easy part is getting the AST, via the compiler.ast module. Also easy
is generating the code, once the data models are worked out.
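
For reference, the front end amounts to roughly this (illustrative only,
using the stock Python 2.x compiler package and its visitor convention):

    import compiler

    tree = compiler.parse("total_u16 = count_u16 + 1")
    print tree   # Module node wrapping the parsed statements

    class Dumper:
        # compiler.walk calls visitAssign for each Assign node
        def visitAssign(self, node):
            print "assign:", node.nodes, "=", node.expr

    compiler.walk(tree, Dumper())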

The hard part is mapping from the abundant high-level Python reality to
the sparse 8-bit microcontroller reality.

I looked at pyastra, but it has fatal problems for my situation:
 - no backend for 18Fxxx devices
 - only 8-bit ints supported

I'm presently reusing parts of the runtime engine from a Forth compiler
I wrote earlier, to add support for 8-32 bit ints, floats, and a
dual-stack environment that gives comfortable support for local
variables/function parameters as well as simpler and more compact code
generation.
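
Purely as an illustration of the dual-stack model (plain Python, not
generated PIC code): one stack holds intermediate operands, the other
holds parameters/locals so they can be addressed at fixed offsets from
the top frame.

    class DualStacks:
        def __init__(self):
            self.data = []     # operand/expression stack
            self.frames = []   # one frame of params + locals per call

        def enter(self, args, nlocals):
            # push a new frame: parameters first, then zeroed locals
            self.frames.append(list(args) + [0] * nlocals)

        def leave(self):
            self.frames.pop()

        def load_local(self, i):
            self.data.append(self.frames[-1][i])

        def store_local(self, i):
            self.frames[-1][i] = self.data.pop()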

Python is all about implicitly and dynamically creating/destroying
arbitrarily typed objects from a heap. I've got a very compact
malloc/free, and could cook up a refcounting scheme, but using this for
core types like ints would destroy performance on a chip that's already
struggling to do 10 MIPS.

The best idea I've come up with so far is to use a convention of
identifier endings to specify type (a rough parsing sketch follows the
list), e.g.:
 - foo_i16 - signed 16-bit
 - foo_u32 - unsigned 32-bit
 - bar_f - 24-bit float
 - blah - if an identifier doesn't have a 'magic ending', it will
          be deemed to be signed 16-bit
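
The mapping from magic ending to type could be as dumb as this (names
and table purely illustrative):

    import re

    SUFFIXES = {
        'i8': ('int', 8),   'u8': ('uint', 8),
        'i16': ('int', 16), 'u16': ('uint', 16),
        'i32': ('int', 32), 'u32': ('uint', 32),
        'f': ('float', 24),
    }

    def ident_type(name):
        m = re.search(r'_([iu](?:8|16|32)|f)$', name)
        if m:
            return SUFFIXES[m.group(1)]
        return ('int', 16)   # no magic ending: deemed signed 16-bit

    # ident_type('foo_u32') -> ('uint', 32)
    # ident_type('blah')    -> ('int', 16)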

Also, some 'virtual' functions: uint16(), int16(), uint32(), int32(),
float(), etc., which work similarly to C-style casts and type
conversions, so I don't have to struggle with type inference at compile
time.
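
On a desktop interpreter those names could simply be stubbed out so the
same source still runs unmodified, while the microcontroller backend
recognizes them and emits width/sign conversions instead of calls.
Hypothetical stand-ins, not an existing library:

    def uint16(x):
        return x & 0xFFFF

    def int16(x):
        x = x & 0xFFFF
        if x >= 0x8000:
            x = x - 0x10000
        return x

    count_u32 = 70000
    offset_i16 = -5
    total_u16 = uint16(count_u32 + offset_i16)   # 69995 & 0xFFFF == 4459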

Yes, this approach sucks. But can anyone offer any suggestions which
suck less?

-- 
Cheers
EB

--

One who is not a conservative by age 20 has no brain.
One who is not a liberal by age 40 has no heart.


