Future division patch available (PEP 238)

Stephen Horne steve at lurking.demon.co.uk
Sun Jul 22 04:35:27 EDT 2001


On Sun, 22 Jul 2001 00:36:38 -0400, Guido van Rossum
<guido at digicool.com> wrote:

>It works as follows:
>
>- unconditionally, there's a new operator // that will always do int
>  division (and an in-place companion //=).
>
>- by default, / is unchanged (and so is /=).
>
>- after "from __future__ import division", / is changed to return a
>  float result from int or long operands (and so is /=).
>
>Read the patch description for more details; the implementations of
>int and float division are semi-lame.
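
To make the quoted rules concrete, here is how I read the patch
description playing out in practice (a sketch only, assuming the
behaviour described above; the __future__ statement must come at the
top of the module):

    from __future__ import division   # opt in to the new meaning of /

    print 7 / 2     # 3.5 -- '/' now does true division on ints
    print 7 // 2    # 3   -- '//' always does integer (floor) division
    print 7.0 // 2  # 3.0 -- '//' floors float operands too

    # Without the __future__ import, 7 / 2 would still print 3;
    # '//' and '//=' are available unconditionally either way.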

Oh dear. I'm sure you're tired of arguments about the division
operator, but I never realised this PEP existed, and I suddenly feel
I have to state my opinion.

First, something at least a little constructive...

Assuming that this change is inevitable, and given that '/' is used
quite a lot for both integer and float division - and has been for a
long time - surely, to avoid breaking code, it is '//' that should
always return a float. That way, the basic '/' would continue to
return what it always has, doing an integer division when the
arguments are integers, while '//' and '//=' would do the new thing -
returning floats even with integer arguments. A division operator
that returns an integer even with float arguments seems particularly
odd, so there's probably no need to worry about providing such an
operator.
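
To be explicit about the alternative I'm proposing, here is a sketch
(hypothetical - this is *not* what the patch does - with a made-up
helper standing in for the proposed '//'):

    def float_div(a, b):
        # Stand-in for the proposed '//': always divide as floats.
        return float(a) / float(b)

    print 7 / 2            # 3   -- '/' keeps its old meaning for ints
    print float_div(7, 2)  # 3.5 -- the proposed '//' returns a float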

As an additional C-style argument, integers are more common than
floats, and '/' is faster to type than '//' ;-)


However, is the change really inevitable now? I'm afraid the whole
principle is something I disagree with strongly.

In Pascal, having separate '/' and 'div' operators was such a pain
that when Ada - a heavily Pascal-influenced language - was defined,
an effort was made to move away from it. And remember, Ada is
generally used by cheap, relatively inexperienced programmers,
because that's all a military contract pays for (yes, I was once an
Ada programmer ;-), and it was specifically designed to avoid
confusion, to be readable and maintainable, and to *not* be
error-prone.

And as it happens, I remember that my first experience with Pascal
led to the question "why does Pascal need two division operators when
BASIC works perfectly well with one?" - BASIC being another language
originally designed for newbie programmers, and one which has
probably seen more newbie programmers than all other languages put
together, if you count the days of the 8-bit and 16-bit micros.

Some might claim that 3 divided by 2 should give the result 1.5 - I
disagree. 3 divided by 2 is 1, with a remainder of 1 - something that
is already supported perfectly well. I seem to remember learning that
*before* learning about fractions and decimals, and *long* before I
ever saw a computer. Division in the set of integers can, by
definition, only have an integer result. The fact that in mathematics
it is common to assume real numbers even when no decimal point is
written is largely irrelevant - programmers often have to be more
explicit than mathematicians, because programs are read by computers
- not people.
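
For what it's worth, today's Python already expresses the
quotient-and-remainder view directly - no patch required:

    print 3 / 2         # 1 -- integer division gives the quotient
    print 3 % 2         # 1 -- '%' gives the remainder
    print divmod(3, 2)  # (1, 1) -- both at once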


Certainly many people will disagree with me, but that simply brings
up an additional point - this is clearly a matter of taste more than
of principle. In matters purely of taste, the tradition has been to
remain consistent with the past, and with good reason: pleasing 50%
of the users at the expense of annoying the other 50% is no net gain,
yet it still costs effort and risks breaking existing code. Why the
sudden change?


Sorry that this is clearly so late, but I've been too caught up in
the exciting new world of iterators and generators to notice this
piece of ancient and once gratefully discarded history sneaking back
into the present. Now that I've seen it, I *must* have my temper
tantrum ;-)


yours in-terror-of-the-broken-code



