Unexpected behaviour of math.floor, round and int functions (rounding)

Ben Bacarisse ben.usenet at bsb.me.uk
Fri Nov 19 20:24:32 EST 2021


Chris Angelico <rosuav at gmail.com> writes:

> On Sat, Nov 20, 2021 at 9:07 AM Ben Bacarisse <ben.usenet at bsb.me.uk> wrote:
>>
>> Chris Angelico <rosuav at gmail.com> writes:
>>
>> > On Sat, Nov 20, 2021 at 5:08 AM ast <ast at invalid> wrote:
>>
>> >>  >>> 0.3 + 0.3 + 0.3 == 0.9
>> >> False
>> >
>> > That's because 0.3 is not 3/10. It's not because floats are
>> > "unreliable" or "inaccurate". It's because the ones you're entering
>> > are not what you think they are.
>> >
>> > When will people understand this?
>> >
>> > (Probably never. Sigh.)
>>
>> Most people understand what's going on when it's explained to them.  And
>> I think that being initially baffled is not unreasonable.  After all,
>> almost everyone comes to computers after learning that 3/10 can be
>> written as 0.3.  And Python "plays along" with the fiction to some
>> extent.  0.3 prints as 0.3, 3/10 prints as 0.3 and 0.3 == 3/10 is True.
>
> In grade school, we learn that not everything can be written that way,
> and 1/3 isn't actually equal to 0.3333333333.

Yes.  We learn early on that 0.3333333333 means 3333333333/10000000000.
We don't learn that 0.3333333333 is a special notation for machines that
have something called "binary floating point hardware" that does not
mean 3333333333/10000000000.  That has to be learned later.  And every
generation has to learn it afresh.
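(For anyone learning it now, the fractions module makes the distinction concrete: converting a float to a Fraction is exact, so it exposes the rational the literal actually denotes, which is a binary approximation rather than 3333333333/10000000000.)

```python
from fractions import Fraction

# Fraction(float) is an exact conversion: it reveals the rational
# value the binary double actually stores for the literal.
exact = Fraction(0.3333333333)

# The stored value is NOT the decimal fraction we wrote down.
print(exact == Fraction(3333333333, 10000000000))  # False
```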

> Yet somehow people
> understand that computers speak binary ("have you learned to count
> yet, or are you still on zeroes and ones?" -- insult to a machine
> empire, in Stellaris), but don't seem to appreciate that floats are
> absolutely accurate and reliable, just in binary.

Yes, agreed, but I was not commenting on the odd (and incorrect) view
that floating point operations are not reliable and well-defined, but on
the reasonable assumption that a clever programming language might take
0.3 to mean what I was taught it meant in grade school.
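(The gap between the grade-school meaning and the stored value is easy to see: Decimal's float constructor converts exactly, so it prints the precise value a CPython IEEE-754 double holds for 0.3.)

```python
from decimal import Decimal

# Decimal(float) is exact, so this shows the binary double's
# true value -- close to, but not equal to, 3/10.
print(Decimal(0.3))
# 0.299999999999999988897769753748434595763683319091796875
```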

As an old hand, I know it won't (in most languages), and I know why it
won't, and I know why that's usually the right design choice for the
language.  But I can also appreciate that it's by no means obvious that,
to a beginner, "binary" implies the particular kind of representation
that makes 0.3 not mean 3/10.  After all, the rational 3/10 can be
represented exactly in binary in many different ways.
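(One such exact representation is simply a pair of binary integers, which is what Python's Fraction type does -- and with it the school arithmetic holds, as this small sketch shows.)

```python
from fractions import Fraction

# 3/10 stored exactly as a numerator/denominator pair of integers
# (each of which is, internally, binary).
x = Fraction(3, 10)

# With an exact rational, the grade-school identity holds.
print(x + x + x == Fraction(9, 10))  # True
```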

> But lack of knowledge is never a problem. (Or rather, it's a solvable
> problem, and I'm always happy to explain things to people.) The
> problem is when, following that lack of understanding, people assume
> that floats are "unreliable" or "inaccurate", and that you should
> never ever compare two floats for equality, because they're never
> "really equal". That's what leads to horrible coding practices and
> badly-defined "approximately equal" checks that cause far more harm
> than a simple misunderstanding ever could on its own.

Agreed.  Often, the "explanations" just make things worse.
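A better explanation might show both halves at once: exact float equality is perfectly meaningful when the values really are identical, and when a tolerance genuinely is wanted, math.isclose gives a documented, well-defined check instead of an ad-hoc one. A quick sketch:

```python
import math

# Exact equality is well-defined: 0.5, 0.25 and 0.75 are all exactly
# representable in binary, so the sum is bit-for-bit equal.
print(0.5 + 0.25 == 0.75)  # True

# Where a tolerance is genuinely needed, use an explicit one
# rather than a home-grown "approximately equal" check.
print(math.isclose(0.3 + 0.3 + 0.3, 0.9, rel_tol=1e-9))  # True
```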

-- 
Ben.

