Floating point equality [was Re: What exactly is "exact" (was Clean Singleton Docstrings)]

Marko Rauhamaa marko at pacujo.net
Wed Jul 20 10:59:11 EDT 2016


Chris Angelico <rosuav at gmail.com>:

> On Wed, Jul 20, 2016 at 11:54 PM, Marko Rauhamaa <marko at pacujo.net> wrote:
>>  2. Floating-point numbers are *imperfect approximations* of real
>>     numbers. Even when real numbers are derived exactly,
>>     floating-point operations may introduce "lossy compression
>>     artifacts" that have to be compensated for in application
>>     programs.
>
> This is the kind of black FUD that has to be fought off. What
> "compression artifacts" are introduced? The *only* lossiness in IEEE
> binary floating-point arithmetic is rounding.

You are joining me in spreading the FUD. Yes, the immediate lossiness is
rounding, but that rounding can accumulate into atrocious cumulative
errors over the course of a numeric calculation.

> Unless you are working with numbers that require more precision than
> you have available, the result should be perfectly accurate.

Whoa, hold it there! Catastrophic cancellation (<URL:
https://en.wikipedia.org/wiki/Loss_of_significance>) is not a myth:

   >>> 0.2 / (0.2 - 0.1)
   2.0
   >>> 0.2 / ((2e15 + 0.2) - (2e15 + 0.1))
   0.8
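For anyone wondering where the 0.8 comes from: near 2e15 the spacing
between adjacent doubles is 0.25, so +0.2 rounds up to the next
representable value while +0.1 rounds away entirely. A quick check
(math.ulp needs Python 3.9+):

```python
import math

# Near 2e15 one ulp (the gap to the next double) is 0.25, so
# 2e15 + 0.2 rounds up to 2e15 + 0.25 while 2e15 + 0.1 rounds
# back down to 2e15 itself.
print(math.ulp(2e15))                       # 0.25
print((2e15 + 0.2) - (2e15 + 0.1))          # 0.25
print(0.2 / ((2e15 + 0.2) - (2e15 + 0.1)))  # 0.8
```

The subtraction itself is exact; the damage was already done when the
two sums were rounded.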

You can fall victim to the phenomenon when you collect statistics over a
long time. The cumulative sum of a measurement can grow so large that
each new sample is partly (or wholly) absorbed by rounding, which causes
the naïve per-second rate calculation to become increasingly bogus.
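A standard mitigation for that accumulation problem (my sketch, not
something from the posts above) is compensated summation: Python's
math.fsum does it for you, or you can keep a Kahan-style running total:

```python
import math

def kahan_sum(values):
    """Kahan compensated summation: carry the rounding error
    lost at each addition and feed it back into the next one."""
    total = 0.0
    c = 0.0  # compensation for low-order bits lost so far
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y  # what total + y rounded away
        total = t
    return total

samples = [0.1] * 10**6        # e.g. a million per-second measurements
print(sum(samples))            # naive sum drifts away from 100000.0
print(kahan_sum(samples))      # stays within a couple of ulps of 100000.0
print(math.fsum(samples))      # 100000.0 (correctly rounded)
```

fsum tracks all the partial errors and returns the correctly rounded
result; Kahan keeps only one correction term but is cheap enough to
update incrementally, which suits a long-running statistics counter.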


Marko


