integer >= 1 == True and integer.0 == False is bad, bad, bad!!!

Paul Rubin no.email at nospam.invalid
Mon Jul 12 20:36:38 EDT 2010


Steven D'Aprano <steve at REMOVE-THIS-cybersource.com.au> writes:
> This is why virtually all low-level languages treat 0 as a false ...

OK, but is Python a low-level language, and if not, why are low-level
languages appropriate examples to follow?

>> if myInt <> 0:
>
> I would argue against that. Why do you, the coder, care about the 
> specific details of treating ints in a boolean context? The int type 
> itself knows, leave the decision to it.

There is a horrible (IMO) thing that Perl and Lua both do, which is
automatically convert strings to numbers, so "12"+3 = 15 (JavaScript
coerces the other way for +, turning "12"+3 into the string "123",
though "12"-3 does give 9).
Python has the good sense to throw a type error if you attempt such an
addition, but it goes and converts various types to bool automatically,
which is more Perl-like than I really enjoy.  In fact Python 3 added yet
another automatic conversion, of int to float on division, so 1/2 = 0.5.
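
A quick interpreter session (Python 3) showing the contrast; the
session is my own illustration, not from the original post:

    >>> "12" + 3                  # mixing str and int: Python refuses
    Traceback (most recent call last):
      ...
    TypeError: can only concatenate str (not "int") to str
    >>> 1 / 2                     # Python 3 "true division": int -> float
    0.5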

Obviously it's a point in the design space where various decisions are
possible, but I can get behind the "explicit is better than implicit"
idea and say that none of those conversions should be automatic, and if
1/2 = 0 was confusing people in Python 2 enough to justify changing
division semantics in Python 3, the preferable change would be for int
division to raise TypeError rather than quietly converting to float.
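
A sketch of what that "explicit" style could look like if int/int
raised TypeError.  The proposal itself is hypothetical, but every name
below is ordinary, working stdlib Python; the caller would simply have
to pick a conversion up front:

    from fractions import Fraction

    a, b = 1, 2
    as_float = float(a) / float(b)   # explicit float division -> 0.5
    floored  = a // b                # explicit floor division -> 0
    exact    = Fraction(a, b)        # explicit exact rational -> Fraction(1, 2)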

The same thing goes for bool conversion, though we are all used to it by
now, and it would be way too disruptive a change.
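
For concreteness, this is the kind of implicit bool conversion meant
here, valid in current Python (again my example, not from the thread):

    if "0":                 # non-empty string: truthy, despite the "0"
        print("this runs")
    if not []:              # empty list: falsy
        print("so does this")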

> if myValue.count <= 0 and myValue.next is None and myValue.blob == "":
>     # myValue is considered false
> versus:
>     if not myValue:
> Which would you prefer?

I'd personally prefer

    if not bool(myValue): 

which would call myValue's __bool__ method if it chose to implement
one.  Explicit is better than implicit.
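
A minimal sketch of how the hypothetical myValue type from the quoted
example might implement that hook.  Python 3 spells it __bool__; in the
Python 2 of this thread's era the same hook was called __nonzero__:

    class MyValue:
        def __init__(self, count=0, next=None, blob=""):
            self.count = count
            self.next = next
            self.blob = blob

        def __bool__(self):
            # False exactly when count <= 0, next is None, and blob == ""
            return self.count > 0 or self.next is not None or self.blob != ""

    myValue = MyValue()
    if not bool(myValue):   # bool() invokes MyValue.__bool__
        print("myValue is considered false")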


