if <assignment>:

Sean McSomething ahmebah at hotmail.com
Sat Nov 30 01:37:00 EST 2002


"André Næss" <andre.hates.spam at ifi.uio.no> wrote in message
news:arqm0r$s6f$1 at maud.ifi.uio.no...
> When I started learning Python one of the things that surprised me was
that
> you couldn't do assignments inside the if clause, e.g.:
>
> if myvar = someFunction():
>
> My question is, what is the rationale for this? Is it a technical issue?
Or
> purely a matter of language design? I'm curious because I'm interested in
> the design og programming languages, not because I want this behavior
> changed in Pyton :)

To start with, Java, PHP, Perl and the like do it because C did it.  C did
it because it was intended to be a 'portable assembly language', so
sometimes one must look back at how things work in assembly to understand
its behavior.  My understanding of assembly is limited to a poor grasp of
x86 and some brief looks at other systems, so bear with me.

Most operations inside the CPU set status flags.  A status flag is like a
1-bit register that reflects the result of the last operation; the x86 has
flags (in EFLAGS) for overflow, zero, sign, parity & carry-out.  Arithmetic
and logical operations set these flags, and on some machines so does a
plain move/assignment -- on the PDP-11, where C grew up, every MOV sets the
N and Z condition codes (x86's MOV, as it happens, does not)...
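As a toy illustration of the idea (all names here are made up; a real CPU
does this in hardware, not in C), here's an "ALU" recording facts about
each result in 1-bit flags:

#include <stdio.h>

/* Toy model of a flags register: two 1-bit facts about the last result. */
struct flags { int zero; int negative; };

static int alu_sub(int a, int b, struct flags *f)
{
    int result = a - b;
    f->zero = (result == 0);     /* ZF: was the result zero?       */
    f->negative = (result < 0);  /* N/SF: was the result negative? */
    return result;
}

int main(void)
{
    struct flags f;
    alu_sub(7, 7, &f);
    printf("zero=%d negative=%d\n", f.zero, f.negative);  /* zero=1 negative=0 */
    return 0;
}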

With this in mind, let's look at how the two different ways of doing the
assignment would work.

First off:

myVar = someFunc();
if (myVar)
    doStuff();

would call someFunc() and then take the result of the call (from a register
or the stack, depending on the calling convention), placing it into myVar.
It would, in a separate step, get the result codes for myVar (most likely
by comparing it with 0; essentially a no-op whose only purpose is to run
myVar through the CPU to set the status flags).  Then there would be a
conditional jump (like a goto) that jumps or not based on the values of
those status flags.
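Sketched as C with a hypothetical, very simplified x86-flavored translation
in the comments (the helper names are made up, and a real compiler's output
will differ):

int someFunc(void);   /* hypothetical helpers, declared just for the sketch */
void doStuff(void);

void twoStep(void)
{
    int myVar;
    myVar = someFunc();  /* call someFunc   ; result arrives in eax        */
                         /* mov  myVar, eax ; plain x86 mov sets no flags  */
    if (myVar)           /* cmp  myVar, 0   ; the extra instruction whose  */
                         /*                 ; only job is to set the flags */
        doStuff();       /* jz   past_body  ; conditional jump reads ZF    */
                         /* call doStuff                                   */
}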

Using an assignment in the if statement, such as:

if (myVar = someFunc())
    doStuff();

would, again, call someFunc() and copy the result into myVar.  On a machine
whose move instruction updates the condition codes, the result code for
that value (specifically the zero/non-zero flag) is already set at this
point, so the next instruction can be the conditional jump that checks
those flags, and away we go.
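And the fused version, with a hypothetical translation for a machine whose
move instruction does set the condition codes (PDP-11-flavored mnemonics
this time, same made-up helpers):

int someFunc(void);
void doStuff(void);

void oneStep(void)
{
    int myVar;
    if ((myVar = someFunc()))  /* jsr  someFunc  ; result arrives in r0    */
                               /* mov  r0, myVar ; the MOV itself sets the */
                               /*                ; N and Z condition codes */
        doStuff();             /* beq  past_body ; branch-if-zero can read */
                               /*                ; them immediately        */
}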

So, we have a minor optimization that cuts out a single instruction.  Back
when C was designed, dropping a single instruction Really Mattered.  Today,
this behavior has no place in a high-level programming language (taking
"high-level" to mean "insulating the programmer from the fact that he is in
fact working on a physical computing device"), so it was not put into the
Python language design.  Consequently, the Python virtual machine wasn't
designed to support this behavior, meaning that an implementation of it,
originally an optimization, would be no more efficient than separate
assignment & comparison statements, and would likely be even less
efficient.
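For what it's worth, the construct C programmers most often defend this
feature with is the read-and-test loop; the classic copy loop from K&R
fuses the assignment and the test in exactly this way:

#include <stdio.h>

/* The K&R copy loop: the assignment to c and the comparison against
   EOF happen in one expression.  In Python you'd write this as a
   loop-and-a-half with an explicit break instead. */
int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(c);
    return 0;
}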


Of course, this could all be complete & utter bullshit, but it's a
reasonable extrapolation from what facts I have & my understanding of
things.




