[Python-ideas] Deprecate the round builtin

Steven D'Aprano steve at pearwood.info
Thu Sep 27 03:46:52 CEST 2012


On 27/09/12 10:32, Bill Janssen wrote:
>> MRAB <python at mrabarnett.plus.com> wrote:
>
>> If we're going to move 'round' into 'math' because it sometimes gives
>> results which puzzle naive users, shouldn't we do the same to 'float'?
>> :-)
>
> Ah, but float is a type, not a function.

What difference does that make? Floats sometimes give results which puzzle
naive users.

py> L = [1/10]*10
py> print(L)
[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
py> sum(L)  # should be 1.0
0.9999999999999999

"Why doesn't Python add up correctly?"

And so we have to explain that the float 0.1 is not actually 0.1 because
you can't represent 0.1 as a finite binary fraction, due to some fairly
subtle mathematics that goes right past most people. The float 0.1 is
actually a tiny bit *larger* than the decimal 0.1, but when you add ten of
them together, you end up with a number that is a tiny bit *smaller* than
the expected result.
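
If you want to see just how far off it is, converting the float to a
Decimal shows the exact value it stores (the digits below are from a
CPython built on IEEE-754 doubles, which is practically every build):

py> from decimal import Decimal
py> Decimal(0.1)  # the exact stored value, a hair above 1/10
Decimal('0.1000000000000000055511151231257827021181583404541015625')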

Don't you just love binary floating point?

Matthew used a smiley there, but I think there is a very strong case for
making the default floating point numeric type Decimal rather than float.
Decimals behave more like people expect, and they interact better with the
other numeric types (ints and Fractions) than floats do.
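
A quick illustration (Python 3.2 or later, since that's when Fraction
learned to accept Decimals):

py> from decimal import Decimal
py> from fractions import Fraction
py> sum([Decimal('0.1')]*10)  # the sum from above, done in decimal
Decimal('1.0')
py> Fraction(Decimal('0.1'))  # converts exactly
Fraction(1, 10)
py> Fraction(0.1)  # the float drags its binary error into the ratio
Fraction(3602879701896397, 36028797018963968)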

Obviously this will be a big change, almost certainly requiring a PEP and
a long lead time, e.g. Decimal could become a built-in in Python 3.4, then
in 3.5 you could do "from __future__ import decimal_floats", and in 3.6 it
could become the standard behaviour; or perhaps there could be a runtime flag
to switch between binary and decimal floats.
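
The future import is hypothetical syntax, of course; nothing like it exists
today. The closest you can get right now is spelling your Decimals out
explicitly, string literals and all, which at least shows what
decimal-by-default arithmetic would feel like:

py> from decimal import Decimal
py> Decimal('0.1') + Decimal('0.2')  # compare 0.1 + 0.2 == 0.30000000000000004
Decimal('0.3')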

I hate to think how much work would be involved.



-- 
Steven


