Division considered un-Pythonic (Re: Case-sensitivity: why -- or why not? (was Re: Damnation!))

Piet van Oostrum piet at cs.uu.nl
Sun Jun 4 11:52:59 EDT 2000


>>>>> "Rainer Deyke" <root at rainerdeyke.com> (RD) writes:

RD> <piet at cs.uu.nl> wrote in message news:u1z2gl8zu.fsf at cs.uu.nl...
>> I want to add another reason for a/b to mean floating division:
>> 
>> As it is now in Python, you can have a==b and c==d both being true, but
>> a/c==b/d being false. Which would be a bad surprise for most people.
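
[To make the quoted claim concrete: under Python's then-current rule, `/` on two ints did integer division but float division otherwise. A sketch, using a hypothetical helper `classic_div` to emulate that behaviour:]

```python
def classic_div(x, y):
    """Emulate classic '/': integer (floor) division when both
    operands are ints, true division otherwise."""
    if isinstance(x, int) and isinstance(y, int):
        return x // y
    return x / y

a, b = 1, 1.0   # a == b is True
c, d = 2, 2.0   # c == d is True
assert a == b and c == d

print(classic_div(a, c))  # 0   (integer division)
print(classic_div(b, d))  # 0.5 (float division)
assert classic_div(a, c) != classic_div(b, d)  # the bad surprise
```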

RD> But, with floating point numbers, you can have (a + b) - b == a be false.

I would argue that this is a different kind of inequality: it is a matter
of accuracy. In the integer-division case it is a semantic difference.
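
[Rainer's accuracy point can be seen with a quick example: 1.0 is smaller
than the spacing between adjacent doubles near 1e16, so it is lost in the
addition.]

```python
a, b = 1.0, 1e16
# 1.0 is below the ulp of 1e16 (which is 2.0), so a + b rounds back to b:
print((a + b) - b)        # 0.0, not 1.0
assert (a + b) - b != a
```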

RD> The only way to avoid this sort of situation is to avoid floating points
RD> entirely.  I don't want my programs to become infested with floats just
RD> because of one innocent division.

Therefore Python should offer a div operator for integer division. (Or use
int()).
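
[Python did eventually add such an operator, `//`, in PEP 238. Note that
the two workarounds are not equivalent for negative operands: floor
division rounds toward negative infinity, while int() truncates toward
zero.]

```python
# Floor division vs. truncation agree for positive operands...
print(7 // 2)         # 3
print(int(7 / 2))     # 3
# ...but differ for negative ones:
print(-7 // 2)        # -4 (floors toward negative infinity)
print(int(-7 / 2))    # -3 (truncates toward zero)
```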
-- 
Piet van Oostrum <piet at cs.uu.nl>
URL: http://www.cs.uu.nl/~piet [PGP]
Private email: P.van.Oostrum at hccnet.nl
