new bitwise module [was Re: Discussion: new operators ...]

Huaiyu Zhu hzhu at localhost.localdomain
Tue Aug 1 22:24:53 EDT 2000


On Tue, 1 Aug 2000 19:02:11 -0400, Tim Peters <tim_one at email.msn.com> wrote:

>That nobody expressed outrage strongly suggests that the "new operators"
>thread and its offshoots have gone on at such length that only a few are
>following them anymore.  So at this point a PEP seems the only way to get
>others to even *see* the idea.

Oh, another explanation is that those who are against infix math operators
find that their arguments apply even better to infix bitwise operators. <0.5wink>

>I could have sworn that just two weeks ago you were passionately arguing
>that named functions are intolerable for *your* field of interest <0.7
>wink>.  So be sure to explain why they're better than infix operators for
>those who like slinging ints.

Well, the benefit does not show much for those who like slinging ints.  The
overall benefit to everyone else, including those who sling ints only
occasionally, comes from eliminating a large proportion of the builtin
operator syntax (complete with its special symbols and precedence rules).
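
For concreteness, here is a minimal sketch of what such a module of named
functions might look like.  The names are only illustrative (bitleft is the
one Tim uses below; the rest are my guesses), not a settled interface:

    # hypothetical bitwise.py -- names illustrative, not a proposal spec
    def bitand(a, b):   return a & b
    def bitor(a, b):    return a | b
    def bitxor(a, b):   return a ^ b
    def bitnot(a):      return ~a
    def bitleft(a, b):  return a << b
    def bitright(a, b): return a >> b

so that (x & mask) | bits would be written bitor(bitand(x, mask), bits),
with no special symbols or precedence rules to remember.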

So why infix operators for math but not for bitwise operations?  A few
reasons come to mind right away:

- The math symbols are familiar.  One rule explains everything that is new
  (adding ~ to an operator to switch between elementwise and objectwise).
  The bitwise operators use many special symbols that most people have to
  look up in the docs (and may find 1^2 surprising; see the interpreter
  session after this list).

- The precedence of the math operators remains unchanged.  The bitwise
  operators have their own precedence levels that most users don't remember.

- The new math operators do not introduce any new semantics for builtin
  types, so 1+2 == 1~+2.  But 1&2 is something else entirely, and the
  semantics of each bitwise operator is defined in the core instead of in a
  module.

- When standing alone, op(a, b) and (a op b) do not differ very much.  Infix
  operators become a necessity only when both operands are peers and the
  resulting (a op b) is plugged into larger composite formulas.  Math
  operators tend to be used this way more often than bitwise operators.

- Matrices tend to appear directly in OO-style programming (replacing lists
  of numbers and for loops, etc.), while bitwise operations tend to be
  encapsulated away inside OO-style programming.
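
To illustrate the symbol and semantics points above with actual Python
behaviour:

    >>> 1 + 2          # familiar arithmetic
    3
    >>> 1 & 2          # bitwise and: a quite different result
    0
    >>> 1 ^ 2          # xor, easily misread as exponentiation
    3
    >>> 2 ** 3         # exponentiation is spelled **
    8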

I suppose these could go into the PEP. 

To me, the acceptance criteria for extra syntax would include "learn once,
remember for life".  I have learned the precedence of the bitwise operators
three times in the past few days, and I'm still not sure I can remember it.
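
For instance (this is Python's actual precedence: + and - bind tighter than
the shifts, which bind tighter than &, then ^, then |), it is easy to guess
wrong:

    >>> 1 << 2 + 3     # parsed as 1 << (2 + 3), not (1 << 2) + 3
    32
    >>> 1 | 2 & 4      # parsed as 1 | (2 & 4)
    1
    >>> (1 | 2) & 4
    0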

Imagine someone who learns Python in a one-day crash course in which both
bitwise and matrix operations are explained with examples.  A week later he
needs to use them, and he understands the semantics of both.  Can he use
either without looking up the syntax in the docs?

>I happen to sling more ints than matrices
>myself, and find no appeal in, e.g., needing to write bitleft(x, 1) instead
>of x << 1.  And after augmented assignments are in, I would strongly prefer
>writing
>
>    a[i] <<= 1
>
>to
>
>    a[i] = bitleft(a[i], 1)

So how does one change an immutable object in place?  The augassign patch
rebinds the reference instead, so this wouldn't work:

    def bitleft_ab(a, b):
        a <<= b          # rebinds the local name a only
    bitleft_ab(a[i], 1)  # a[i] is unchanged afterwards
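
A quick demonstration (assuming the augmented-assignment patch is in, and
using a list of plain ints):

    a = [1, 2, 3]
    def bitleft_ab(x, b):
        x <<= b       # ints are immutable; only the local name x is rebound
    bitleft_ab(a[0], 1)
    print a[0]        # still 1
    a[0] <<= 1        # rebinding the list slot at the call site does work
    print a[0]        # now 2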

Maybe a bitwise class would help? 

a[i].left(1)
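
A minimal sketch of such a class (purely hypothetical, wrapping an int so
that it can be updated in place):

    class Bits:
        # hypothetical wrapper, not an existing module
        def __init__(self, value):
            self.value = value
        def left(self, n):
            self.value = self.value << n   # mutate the wrapper, not the int
            return self

    a = [Bits(1), Bits(5)]
    a[0].left(3)      # a[0].value is now 8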

Or do we need to wait for type-class unification?  Otherwise it seems to me
that no completely consistent treatment of these operations is possible.

Huaiyu


