Unexpected behaviour of math.floor, round and int functions (rounding)

Chris Angelico rosuav at gmail.com
Fri Nov 19 20:51:04 EST 2021


On Sat, Nov 20, 2021 at 12:43 PM Ben Bacarisse <ben.usenet at bsb.me.uk> wrote:
>
> Chris Angelico <rosuav at gmail.com> writes:
>
> > On Sat, Nov 20, 2021 at 9:07 AM Ben Bacarisse <ben.usenet at bsb.me.uk> wrote:
> >>
> >> Chris Angelico <rosuav at gmail.com> writes:
> >>
> >> > On Sat, Nov 20, 2021 at 5:08 AM ast <ast at invalid> wrote:
> >>
> >> >>  >>> 0.3 + 0.3 + 0.3 == 0.9
> >> >> False
> >> >
> >> > That's because 0.3 is not 3/10. It's not because floats are
> >> > "unreliable" or "inaccurate". It's because the ones you're entering
> >> > are not what you think they are.
> >> >
> >> > When will people understand this?
> >> >
> >> > (Probably never. Sigh.)
> >>
> >> Most people understand what's going on when it's explained to them.  And
> >> I think that being initially baffled is not unreasonable.  After all,
> >> almost everyone comes to computers after learning that 3/10 can be
> >> written as 0.3.  And Python "plays along" with the fiction to some
> >> extent.  0.3 prints as 0.3, 3/10 prints as 0.3 and 0.3 == 3/10 is True.
> >
> > In grade school, we learn that not everything can be written that way,
> > and 1/3 isn't actually equal to 0.3333333333.
>
> Yes.  We learn early on that 0.3333333333 means 3333333333/10000000000.
> We don't learn that 0.3333333333 is a special notation for machines that
> have something called "binary floating point hardware" that does not
> mean 3333333333/10000000000.  That has to be learned later.  And every
> generation has to learn it afresh.

But you learn that 0.3333333333 isn't the same as 1/3. That's my point. You
already understand that it is *impossible* to write out 1/3 in
decimal. Is it such a stretch to discover that you cannot write 3/10
in binary?
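
For a concrete look, here's a quick sketch (standard library only)
of the value Python actually stores when you type 0.3:

    from decimal import Decimal

    # The literal 0.3 becomes the nearest IEEE 754 double, a binary
    # fraction close to, but not equal to, 3/10:
    print(Decimal(0.3))
    # 0.299999999999999988897769753748434595763683319091796875

    # The same stored value as an exact ratio of integers:
    print((0.3).as_integer_ratio())
    # (5404319552844595, 18014398509481984)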

Every generation has to learn about repeating fractions, but most of
us learn them in grade school. Every generation learns that computers
talk in binary. Yet, putting those two concepts together seems beyond
many people, to the point that they feel that floating point can't be
trusted.
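
One way to put the two concepts together is to write out the binary
digits of 3/10; a small sketch using the fractions module, so the
arithmetic stays exact and no rounding creeps in:

    from fractions import Fraction

    # Generate binary digits of 3/10 by repeated doubling:
    x = Fraction(3, 10)
    digits = []
    for _ in range(20):
        x *= 2
        bit = x.numerator // x.denominator   # integer part: 0 or 1
        digits.append(str(bit))
        x -= bit
    print("0." + "".join(digits))
    # 0.01001100110011001100  (the block "0011" repeats forever)

Just as 1/3 is 0.333... repeating in decimal, 3/10 is
0.0100110011... repeating in binary, so no finite binary float can
hold it exactly.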

> Yes, agreed, but I was not commenting on the odd (and incorrect) view
> that floating point operations are not reliable and well-defined, but on
> the reasonable assumption that a clever programming language might take
> 0.3 to mean what I was taught it meant in grade school.

It does mean exactly what it meant in grade school, just as 1/3 means
exactly what it meant in grade school. Now try to represent 1/3 on a
blackboard as a decimal fraction. If that's impossible, does it mean
that 1/3 doesn't mean 1/3, or that 1/3 can't be represented?
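
A short illustration of that analogy, again using the fractions
module for exact arithmetic:

    from fractions import Fraction

    # The mathematical 1/3, held exactly as a rational:
    print(Fraction(1, 3))    # 1/3

    # The float 1/3 is the nearest binary fraction, not 1/3 itself:
    print(Fraction(1/3))     # 6004799503160661/18014398509481984

Neither line means that 1/3 doesn't mean 1/3; it means that no
finite expansion, decimal or binary, can represent it.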

> > But lack of knowledge is never a problem. (Or rather, it's a solvable
> > problem, and I'm always happy to explain things to people.) The
> > problem is when, following that lack of understanding, people assume
> > that floats are "unreliable" or "inaccurate", and that you should
> > never ever compare two floats for equality, because they're never
> > "really equal". That's what leads to horrible coding practices and
> > badly-defined "approximately equal" checks that cause far more harm
> > than a simple misunderstanding ever could on its own.
>
> Agreed.  Often, the "explanations" just make things worse.
>

When they're based on a fear of floats, yes. Explanations like "never
use == with floats because 0.1+0.2!=0.3" are worse than useless,
because they instill that very fear and breed awful cargo-cult
programming practices.
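
For contrast, here's a rough sketch of the advice I'd rather see:
== on floats is exact and well-defined, and a tolerance check
(math.isclose) is for computations that genuinely accumulate
rounding error, not a blanket replacement for equality:

    import math

    # Floats that are exact binary fractions compare equal, reliably:
    print(0.5 + 0.25 == 0.75)             # True

    # 0.1, 0.2 and 0.3 are not the decimals they resemble, so the
    # rounded sum differs from the rounded literal:
    print(0.1 + 0.2 == 0.3)               # False

    # When rounding error is expected, compare within a tolerance:
    print(math.isclose(0.1 + 0.2, 0.3))   # True (default rel_tol=1e-09)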

If someone does something in Python, gets a weird result, and comes to
the list saying "I don't understand this", that's not a problem. We
get it all the time with mutables; the recent question about frozensets
appearing to be mutable was the same thing. I have no problem with someone
humbly asking "what's happening?", based on an internal assumption
that there's a reason things are the way they are. For some reason,
floats don't get that same respect from many people.

ChrisA

