Unexpected behaviour of math.floor, round and int functions (rounding)

Ben Bacarisse ben.usenet at bsb.me.uk
Fri Nov 19 22:25:53 EST 2021


Chris Angelico <rosuav at gmail.com> writes:

> On Sat, Nov 20, 2021 at 12:43 PM Ben Bacarisse <ben.usenet at bsb.me.uk> wrote:
>>
>> Chris Angelico <rosuav at gmail.com> writes:
>>
>> > On Sat, Nov 20, 2021 at 9:07 AM Ben Bacarisse <ben.usenet at bsb.me.uk> wrote:
>> >>
>> >> Chris Angelico <rosuav at gmail.com> writes:
>> >>
>> >> > On Sat, Nov 20, 2021 at 5:08 AM ast <ast at invalid> wrote:
>> >>
>> >> >>  >>> 0.3 + 0.3 + 0.3 == 0.9
>> >> >> False
>> >> >
>> >> > That's because 0.3 is not 3/10. It's not because floats are
>> >> > "unreliable" or "inaccurate". It's because the ones you're entering
>> >> > are not what you think they are.
>> >> >
>> >> > When will people understand this?
>> >> >
>> >> > (Probably never. Sigh.)
>> >>
>> >> Most people understand what's going on when it's explained to them.  And
>> >> I think that being initially baffled is not unreasonable.  After all,
>> >> almost everyone comes to computers after learning that 3/10 can be
>> >> written as 0.3.  And Python "plays along" with the fiction to some
>> >> extent.  0.3 prints as 0.3, 3/10 prints as 0.3 and 0.3 == 3/10 is True.
>> >
>> > In grade school, we learn that not everything can be written that way,
>> > and 1/3 isn't actually equal to 0.3333333333.
>>
>> Yes.  We learn early on that 0.3333333333 means 3333333333/10000000000.
>> We don't learn that 0.3333333333 is a special notation for machines that
>> have something called "binary floating point hardware" that does not
>> mean 3333333333/10000000000.  That has to be learned later.  And every
>> generation has to learn it afresh.
>
> But you learn that it isn't the same as 1/3. That's my point. You
> already understand that it is *impossible* to write out 1/3 in
> decimal. Is it such a stretch to discover that you cannot write 3/10
> in binary?
>
> Every generation has to learn about repeating fractions, but most of
> us learn them in grade school. Every generation learns that computers
> talk in binary. Yet, putting those two concepts together seems beyond
> many people, to the point that they feel that floating point can't be
> trusted.

Binary is a bit of a red herring here.  It's the floating point format
that needs to be understood.  Three tenths can be represented in many
binary formats, and even decimal floating point will have some surprises
for the novice.
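
To make that last point concrete: even with exact decimal arithmetic
(Python's standard decimal module, assuming its default 28-digit
context), a single rounded division is enough to break an "obvious"
identity:

>>> from decimal import Decimal
>>> Decimal(1) / Decimal(3) * 3 == Decimal(1)
False
>>> Decimal(1) / Decimal(3) * 3
Decimal('0.9999999999999999999999999999')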

>> Yes, agreed, but I was not commenting on the odd (and incorrect) view
>> that floating point operations are not reliable and well-defined, but on
>> the reasonable assumption that a clever programming language might take
>> 0.3 to mean what I was taught it meant in grade school.
>
> It does mean exactly what it meant in grade school, just as 1/3 means
> exactly what it meant in grade school. Now try to represent 1/3 on a
> blackboard, as a decimal fraction. If that's impossible, does it mean
> that 1/3 doesn't mean 1/3, or that 1/3 can't be represented?

As you know, it is possible, but let's say we outlaw any finite notation
for repeated digits...  Why should I convert 1/3 to this particular
apparently unsuitable representation?  I will write 1/3 and manipulate
that number using fractional notation.

The novice programmer might similarly expect that when they write 0.3,
the program will manipulate that number as the fraction it clearly is.
They may well be surprised that it must be put into a format that can't
represent what those three characters mean, just as I would be surprised
if you insisted I write 1/3 as a finite decimal (with no repeat
notation).
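
To see the gap, compare what the novice has in mind with what the float
literal actually denotes (a quick sketch with the standard fractions
module; the exact integers below assume IEEE 754 double precision):

>>> from fractions import Fraction
>>> Fraction(3, 10) + Fraction(3, 10) + Fraction(3, 10) == Fraction(9, 10)
True
>>> Fraction(0.3)
Fraction(5404319552844595, 18014398509481984)

The first result is the arithmetic they were taught; the second is the
number their three characters actually produced.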

I'm not saying your analogy would not help someone understand, but you
first have to explain why 0.3 is not treated as three tenths -- why I
(to use your analogy) must not keep 1/3 as a proper fraction but must
instead write it using a finite number of decimal digits.  Neither point
is, in my view, obvious to the beginner.

>> > But lack of knowledge is never a problem. (Or rather, it's a solvable
>> > problem, and I'm always happy to explain things to people.) The
>> > problem is when, following that lack of understanding, people assume
>> > that floats are "unreliable" or "inaccurate", and that you should
>> > never ever compare two floats for equality, because they're never
>> > "really equal". That's what leads to horrible coding practices and
>> > badly-defined "approximately equal" checks that cause far more harm
>> > than a simple misunderstanding ever could on its own.
>>
>> Agreed.  Often, the "explanations" just make things worse.
>
> When they're based on a fear of floats, yes. Explanations like "never
> use == with floats because 0.1+0.2!=0.3" are worse than useless,
> because they create that fear in a way that creates awful cargo-cult
> programming practices.
>
> If someone does something in Python, gets a weird result, and comes to
> the list saying "I don't understand this", that's not a problem. We
> get it all the time with mutables. Recent question about frozensets
> appearing to be mutable, same thing. I have no problem with someone
> humbly asking "what's happening?", based on an internal assumption
> that there's a reason things are the way they are. For some reason,
> floats don't get that same respect from many people.

On all this, I agree.  As a former numerical analyst, I want maximal
respect for all the various floating-point representations!
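
For what it's worth, == on floats deserves that respect too; it is as
well-defined as everything else in IEEE 754 arithmetic, and the usual
"surprise" lives in the literals, not the comparison:

>>> 0.5 + 0.25 == 0.75
True
>>> 0.1 + 0.2 == 0.30000000000000004
True

The second comparison is exactly as reliable as the first; it merely
isn't the one people expect to be true.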

-- 
Ben.