Floating point equality [was Re: What exactly is "exact" (was Clean Singleton Docstrings)]

Rustom Mody rustompmody at gmail.com
Thu Jul 21 01:38:10 EDT 2016


On Wednesday, July 20, 2016 at 8:29:25 PM UTC+5:30, Marko Rauhamaa wrote:
> Chris Angelico:
> 
> > On Wed, Jul 20, 2016 at 11:54 PM, Marko Rauhamaa  wrote:
> >>  2. Floating-point numbers are *imperfect approximations* of real
> >>     numbers. Even when real numbers are derived exactly,
> >>     floating-point operations may introduce "lossy compression
> >>     artifacts" that have to be compensated for in application
> >>     programs.
> >
> > This is the kind of black FUD that has to be fought off. What
> > "compression artifacts" are introduced? The *only* lossiness in IEEE
> > binary floating-point arithmetic is rounding.
> 
> You are joining me in spreading the FUD. Yes, the immediate lossiness is
> rounding, but the effects of that rounding can result in atrocious
> accumulative errors in numeric calculations.
> 
> > Unless you are working with numbers that require more precision than
> > you have available, the result should be perfectly accurate.
> 
> Whoa, hold it there! Catastrophic cancellation (<URL:
> https://en.wikipedia.org/wiki/Loss_of_significance>) is not a myth:

The article's lead paragraph starts:

| Catastrophic cancellation… The effect is that the number of accurate 
| (significant) digits in the result is reduced unacceptably. Ways to avoid this 
| effect are studied in numerical analysis.
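A concrete sketch of the effect the article describes, using 1 - cos(x) for small x (approximately x**2 / 2). The naive subtraction cancels catastrophically, while the algebraically equivalent half-angle form 2*sin(x/2)**2 does not:

```python
import math

x = 1e-8

# cos(1e-8) is so close to 1.0 that it rounds to exactly 1.0 in
# double precision, so the subtraction cancels every significant
# digit and returns 0.0.
naive = 1.0 - math.cos(x)

# Algebraically identical, but no nearly-equal quantities are
# subtracted, so full precision is retained.
stable = 2.0 * math.sin(x / 2.0) ** 2

print(naive)   # 0.0 -- all significance lost
print(stable)  # ~5e-17, the correct answer
```

Both formulas are mathematically the same; only the order of operations differs, which is exactly why numerical analysts rearrange expressions like this.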

I would go a step further: the field of numerical analysis came into
existence only because this fact, multiplied by the fact that computers
do their (inaccurate ≠ inexact) computations billions of times faster
than we do, makes significance a very significant problem!
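A small illustration of how those individually tiny rounding errors pile up: 0.1 has no exact binary representation, so each addition rounds, and after only ten additions the running total already misses 1.0. (math.fsum tracks the lost low-order bits and recovers the correctly rounded sum.)

```python
import math

# Repeatedly add the double nearest to 0.1; every += rounds,
# and the small errors accumulate rather than cancel.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # 0.9999999999999999
print(total == 1.0)  # False

# A compensated summation gets the correctly rounded result.
print(math.fsum([0.1] * 10))  # 1.0
```

Scale ten additions up to the billions of operations a machine performs per second and the accumulated drift can dominate the answer unless the algorithm is designed to compensate for it.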



More information about the Python-list mailing list