What exactly is "exact" (was Clean Singleton Docstrings)

Chris Angelico rosuav at gmail.com
Mon Jul 18 06:15:10 EDT 2016


On Mon, Jul 18, 2016 at 8:00 PM, Marko Rauhamaa <marko at pacujo.net> wrote:
> Python programmers (among others) frequently run into issues with
> surprising results in floating-point arithmetics. For better or worse,
> Scheme has tried to abstract the concept. You don't need to explain the
> ideas of IEEE 64-bit floating-point numbers or tie the hands of the
> implementation. Instead, what you have is "reliable" arithmetics and
> "best-effort" arithmetics, a bit like TCP is "reliable" and UDP is
> "best-effort".

The problem with that is that failing to explain IEEE floating point
and just calling it "inexact" scares people off unnecessarily. I've
seen a lot of very intelligent people who think that you should never
compare floats with the == operator, because floats randomly introduce
"inaccuracy". And then you get these sorts of functions:

EPSILON = 0.000001  # Adjust to control numeric accuracy
def is_equal(f1, f2, epsilon=EPSILON):
    if abs(f1) > abs(f2):
        f1, f2 = f2, f1                    # now abs(f1) <= abs(f2)
    return abs(f2 - f1) < abs(f1) * epsilon  # abs() here, or a negative f1 flips the test

and interminable debates about how to pick an epsilon, whether it
should be relative to the smaller value (as here) or the larger (use
f2 instead), or maybe it should be an absolute value, or maybe it should
be relative to the largest/smallest value that was ever involved in
the calculation, or........
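For what it's worth, Python 3.5+ ships a vetted answer to that debate in
the standard library, so nobody has to pick an epsilon by hand (a small
sketch; the tolerances shown are just the defaults):

```python
import math

# math.isclose() compares relative to the LARGER magnitude by default
# (rel_tol=1e-09), with an optional absolute floor for values near zero.
print(math.isclose(0.1 + 0.2, 0.3))            # True
print(math.isclose(1e-12, 0.0))                # False: relative tolerance alone fails near zero
print(math.isclose(1e-12, 0.0, abs_tol=1e-9))  # True: abs_tol covers that case
```

The near-zero case is exactly why the function takes both parameters:
a purely relative test can never declare anything "close to" zero.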

Floating point numbers are a representation of real numbers that
carries a finite amount of precision. They're ultimately no different
from grade-school arithmetic, where you round stuff off so you don't
need an infinite amount of paper, except that they round in binary
rather than decimal. Since 0.1 and 0.2 have no finite binary
representation, people think "0.1 + 0.2 ought to be exactly 0.3, why
isn't it??", and blame floats.
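The point is easy to demonstrate: every float IS an exact binary
fraction, just not necessarily the decimal one you typed (a small
sketch using the stdlib):

```python
from fractions import Fraction

# Fraction(float) recovers the exact binary fraction the float stores.
print(Fraction(0.1))   # 3602879701896397/36028797018963968, slightly above 1/10
print(Fraction(0.5))   # 1/2 -- powers of two are stored exactly

# So == is perfectly well-defined; the surprise is in the rounding, not the operator.
print(0.1 + 0.2 == 0.3)    # False: both sides were rounded, to different values
print(0.25 + 0.5 == 0.75)  # True: every value here is exactly representable
```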

Explain what they REALLY do and how they work, and you'll find they're
not so scary.
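One concrete way to do that explanation, if it helps: the stdlib can
show exactly what a float really holds (a sketch):

```python
# float.hex() prints the exact significand and power-of-two exponent --
# what the float REALLY is, with no decimal rounding in the way.
print((0.1).hex())   # 0x1.999999999999ap-4   (a repeating binary pattern, cut off)
print((0.5).hex())   # 0x1.0000000000000p-1   (exact: it's a power of two)

# And the round-trip back is exact, because nothing was approximated:
assert float.fromhex((0.1).hex()) == 0.1
```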

ChrisA



More information about the Python-list mailing list