Python vs. Lisp -- please explain

Steven D'Aprano steve at REMOVETHIScyber.com.au
Tue Feb 21 17:16:00 EST 2006


On Tue, 21 Feb 2006 09:46:27 -0800, Donn Cave wrote:

> In article <43FAD2C2.7080908 at REMOVEMEcyber.com.au>,
>  Steven D'Aprano <steve at REMOVEMEcyber.com.au> wrote:
> ...
>> Hey Donn, here is a compiled program for the PowerPC, 
>> or an ARM processor, or one of IBM's Big Iron 
>> mainframes. Or even a Commodore 64. What do you think 
>> the chances are that you can execute it on your 
>> x86-compatible PC? It's compiled, it should just 
>> work!!! Right?
>> 
>> No of course not. If your CPU can't interpret the 
>> machine code correctly, the fact that the code is 
>> compiled makes NO difference at all.

[snip for brevity]

> Sure, all this is true, except for the term "interpreter."
> You would surely not use the word that way, unless you
> just didn't want to communicate.

Do you honestly believe that the CPU doesn't have to interpret the machine
code, or are you just deliberately playing silly buggers with language?

In modern CPUs, there is an intermediate layer of micro-code between the
machine code your C compiler generates and the actual instructions
executed in hardware. But even if we limit ourselves to obsolete hardware
without micro-code, I ask you to think about what an interpreter does, and
what the CPU does, in the most general way possible.

Both take a stream of instructions. Both have to fetch each instruction,
decode it, and execute it. In both cases the link between the instruction and the
effect is indirect: for example, the machine code 00000101 on the 
Zilog Z80 processor causes the CPU to decrement the B processor register.
In assembly language this would be written as DEC B. There is absolutely
nothing fundamental about the byte value 5 that inherently means
"decrement B processor register".

In other words, machine language is a language, just like it says, and
like all languages, it must be interpreted.
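
If it helps, here is a toy sketch of that interpretation step, written in
Python. It is purely illustrative (one opcode, no flags, no memory), but a
real CPU or emulator does essentially the same fetch, decode and execute
dance, just in silicon and for a few hundred opcodes:

    # Toy fetch-decode-execute loop. Purely illustrative, not a real Z80 emulator.
    registers = {"B": 7}

    def dec_b(regs):
        # The meaning the Z80 assigns to the byte 0x05: decrement register B.
        regs["B"] = (regs["B"] - 1) & 0xFF

    dispatch = {0x05: dec_b}      # opcode -> action; the mapping is pure convention

    program = [0x05, 0x05]        # two DEC B instructions

    for opcode in program:            # fetch
        dispatch[opcode](registers)   # decode and execute

    print(registers["B"])         # 7 minus 2 is 5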

> Your paragraph above that starts with "No of course not",
> even omits a point that everyone understands, you can in
> fact expect a .py file will work independent of machine
> architecture - like any interpreted language.

Amazing. In your previous post you were telling everybody how the
*disadvantage* of interpreted programs is that they won't run unless the
interpreter is present, and in this post you are telling us that
interpreted languages will just work. What happened to the requirement for
an interpreter?

Let's see you run that Python program on a Zilog Z80 without a Python
interpreter. Can't be done. No interpreter, whether in hardware or
software, and the program won't run, whether in source code or byte code
or machine code.
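
For that matter, look at what CPython itself does with your "interpreted"
program: it compiles the source to byte code, and that byte code is just
another instruction stream that something has to interpret, namely the
CPython virtual machine. You can see it with the standard dis module (the
exact opcodes vary from one Python version to the next):

    import dis

    def f(x):
        return x - 1

    dis.dis(f)   # disassembles the byte code that the virtual machine interprets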

If I allow you the assumption that every machine has a Python interpreter,
perhaps you'll return the favour and allow me an interpreter for machine
language (usually called an emulator). Now your compiled C or Lisp code will
also run independent of machine architecture.

In order to force "interpreted language" and "compiled language" into two
distinct categories, rather than two overlapping regions of a single
continuum, you have to ignore reality. You ignore interpreted
languages that are compiled, you ignore the reality of how machine code is
used in the CPU, you ignore the existence of emulators, and you ignore
virtual machines.
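
The first of those is trivial to demonstrate, by the way. Your "interpreted"
Python source goes through a compiler before the virtual machine ever sees a
single statement:

    # compile() is a builtin: source text in, code object (byte code) out.
    code = compile("total = 6 * 7", "<string>", "exec")
    namespace = {}
    exec(code, namespace)        # the byte code is what actually gets interpreted
    print(namespace["total"])    # 42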


> We all know
> what native code compilation buys you and what it doesn't.

Did you fail to learn *anything* from my parable of interpreted Lisp on a
Macintosh II running faster than compiled Lisp running on a Mac II fitted
with a Lisp processor?


-- 
Steven



