[Python-Dev] Deprecating float.is_integer()

Tim Peters tim.peters at gmail.com
Wed Mar 21 22:29:16 EDT 2018


[Chris Barker <chris.barker at noaa.gov>]
> ...
> ... "is it the "right" thing to do in most cases, when deployed by folks
> that haven't thought deeply about floating point.

Gimme a break ;-)  Even people who _believe_ they've thought about
floating point still litter the bug tracker with

>>> .1 + .2
0.30000000000000004

"bug reports".  .is_integer() is easy to explain compared to that  -
and you have to go out of your way to use it.
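
For anyone who hasn't tried it, the method is about as easy to
demonstrate as it is to explain - it just reports whether the value is
an exact integer:

>>> (3.0).is_integer()
True
>>> (0.1 + 0.2).is_integer()
False
>>> (1e300).is_integer()  # huge, but still an exact integer
True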

> ...
> I think pretty much all the real world code that's been shown here for using
> .is_integer() is really about type errors (issues). The function at hand
> really wants integer inputs -- but wants to allow the user to be sloppy and
> provide a float type that happens to be an int. Given Python's duck-typing
> nature, maybe that's a good thing? I know I really discourage dynamic type
> checking....

So you identified a use case.  One you don't approve of (nor do I),
but not strongly enough to demand they suffer instead ;-)
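
For concreteness, the kind of code in question generally looks
something like this sketch (a made-up example - the function name and
error message aren't lifted from any of the code posted here):

import math

def loose_factorial(n):
    # Wants an integer, but tolerates a float that happens to be
    # integral - the "sloppy caller" pattern under discussion.
    if isinstance(n, float):
        if not n.is_integer():
            raise TypeError("n must be an integral value, not %r" % n)
        n = int(n)  # silently accept 5.0 as 5
    return math.factorial(n)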


> Also, every example has been for small-ish integers -- exponents,
> factorials, etc -- not order 1e300 -- or inf or NaN, etc.
>
> Finally, the use-cases where the value that happens-to-be-an-int is computed
> via floating point -- .is_integer() is probably the wrong check -- you
> probably want  isclose().

Everyone who has implemented a production math library can recall
cases where the functionality was needed.  Here, that includes at
least Stefan Krah and me.  You could also follow the link from Mark
Dickinson to SciPy's implementation of the beta function.

In every case I've needed the functionality, isclose() would have been
utterly useless.  Behold:

>>> (-1.0) ** 3.0
-1.0
>>> (-1.0) ** 3.000000000001  # different result _type_
(-1-3.142007854859299e-12j)
>>> math.isclose(3.0, 3.000000000001)
True

And another showing that the same functionality is needed regardless
of how large the power:

>>> (-1.0) ** 1e300  # an even integer power
1.0

When you're implementing an externally defined standard and it says
"and if such-and-such is an integer ...", it _means_ exactly an
integer, not "or a few ULP away from an integer".  IEEE pow()-like
functions bristle with special cases for integers.

>>> (-math.inf) ** 3.1
inf
>>> (-math.inf) ** 3.0 # note: this one has a negative result (odd integer power)
-inf
>>> (-math.inf) ** 2.9
inf
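
To make that concrete, here's a rough sketch of the kind of branching a
pow()-like library function ends up doing for a negative base (my own
illustration - the name real_pow is invented, and real implementations
have many more special cases than this):

import math

def real_pow(x, y):
    # A negative finite base is only legal when the exponent is
    # *exactly* an integer; raise otherwise, as math.pow() does,
    # rather than falling back to a complex result the way ** does.
    if not (x < 0.0) or math.isinf(x) or math.isnan(y) or math.isinf(y):
        return x ** y  # float ** already handles these cases
    if not y.is_integer():
        raise ValueError("negative base requires an integer exponent")
    # Odd integer exponents keep the base's sign; even ones don't.
    sign = -1.0 if math.fmod(y, 2.0) != 0.0 else 1.0
    return sign * (-x) ** y

Note that math.isclose() is of no help in that last branch: asking
"odd or even integer?" only makes sense if the exponent really is an
integer.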


> ...
> All this points to: we don't need .is_integer

I'll grant that you don't think you need it.  So don't use it ;-)


> All that being said -- the standard for deprecation is a much higher bar
> than not-adding-it-in-the-first-place.

I would not have added it as a method to begin with - but I agree with
Guido that it doesn't reach the bar for deprecation.  The only
examples of "bad" uses we saw were from people still so naive about
floating-point behavior that they'll easily fall into other ways to
get it wrong.  What we haven't seen:  a single person here saying "you
know, I think _I'd_ be seduced into misusing it!".  It's not
_inherently_ confusing at all.

