Changing the Division Operator -- PEP 238, rev 1.12

Mark Day mday at apple.com
Mon Jul 30 14:13:58 EDT 2001


In article <mailman.996289972.15493.python-list at python.org>, Arthur
Siegel <ajs at ix.netcom.com> wrote:

> Would be a lot more compelling were they reports *from* rather than
> reports *about*.

Agreed.  But what struck me was that when a non-programmer came to a
Python discussion list asking whether Python was suitable for his
project, several people warned him about the pitfalls of division (and
not much else).

I don't recall whether the experienced programmers explicitly said they
had fallen into the trap.  Considering how many of them jumped right to
the subject of division, it seems plausible that they had experienced
it themselves.  And something made enough of an impression on them to
make the issue the first one they mentioned.

I suppose it could be the computer equivalent of hypochondria -- people
hear about an obscure bug and it suddenly becomes The One True Bug even
though neither they nor their friends, family, etc. have actually
encountered the bug.  But the comments sounded like they came more from
experience than fear.

> Are you compelled? Why?

Because it helped me understand the motivation for the PEP.  It helped
convince me that there was indeed a problem, and that it was worthy of
a fix.

My current work and leisure programming almost never uses floating
point.  I primarily use C, a statically typed language; when a routine
expects a float as input, it gets a float, or the program doesn't
compile.  And when I last did non-trivial numeric work, it was with a
version of BASIC that only supported floats; what looked like integers
were actually floats that just happened to have all zeroes after the
decimal point.  So, I've never really had a problem where integer
division happened when I was expecting float division.

The current division operator is perfectly understandable to me.  You
give it integer inputs and you get an integer output.  You feed it
float inputs and get a float output.  I know what I'm giving it, so I
know what it will produce.  It's really two different operators (float
division and int division) that happen to have the same name.  It works
the same way in C (though the compiler knows at compile time whether it
will be float or int division).  Given my background, the behavior was
what I expected, so I didn't think it was broken.
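That two-operators-with-one-name behavior is easy to show.  A minimal
sketch (using post-PEP 238 Python, where the proposal has split the
operators: "/" is always true division and "//" is the old floor
division; in the classic behavior described above, "/" alone did both
jobs depending on the operand types):

```python
# Classic behavior (pre-PEP 238): one operator, two meanings.
#   1 / 2    ->  0     (int inputs, int output: floor division)
#   1.0 / 2  ->  0.5   (a float input, float output)
#
# PEP 238 splits the two meanings into two spellings:

print(7 / 2)     # true division: a float result even for int inputs
print(7 // 2)    # floor division: the old int/int behavior
print(7.0 // 2)  # floor division is also defined for floats
```

The point of the split is that the operator, not the (possibly
invisible) types of the inputs, now decides which kind of division you
get.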

Those other threads reminded me that Python is approachable.  People
will use it to help solve a problem in their area of expertise.  Those
people may not be as aware of the distinction between ints and floats. 
And since Python is so easy to use interactively, I can see someone
typing an integer literal that ends up in a routine that assumed a
float.  Python would behave differently than a typical pocket
calculator (where everything is a float).

If I write a routine that expects floats, and I don't have control over
what calls that routine, I'll probably check the inputs or explicitly
convert them to floats.  Would a non-programmer do that?  Maybe
(probably?) not.  They may (probably?) do less testing.  Since they
probably wouldn't get a compile or runtime error, just incorrect
results, I think it would take longer to notice the error.
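The defensive conversion mentioned above can be as simple as coercing
the arguments at the top of the routine.  A sketch ("average" is a
made-up example, not a routine from the thread):

```python
def average(a, b):
    # Convert up front, so integer arguments can't silently trigger
    # integer division further down in the routine.
    a = float(a)
    b = float(b)
    return (a + b) / 2
```

This is the kind of habit an experienced programmer might have and a
non-programmer probably wouldn't, which is the gap the PEP is trying
to close.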

-Mark



More information about the Python-list mailing list