Inverse confusion about floating point precision

Tim Peters tim.peters at gmail.com
Mon May 9 13:51:53 EDT 2005


[Skip Montanaro]
> I understand why the repr() of float("95.895") is "95.894999999999996".
> What I don't understand is why if I multiply the best approximation to
> 95.895 that the machine has by 10000 I magically seem to get the lost
> precision back.  To wit:
>
>    % python
>    Python 2.3.4 (#12, Jul  2 2004, 09:48:10)
>    [GCC 3.3.2] on sunos5
>    Type "help", "copyright", "credits" or "license" for more information.
>    >>> 95.895
>    95.894999999999996
>    >>> 95.895 * 10000
>    958950.0
>
> Why isn't the last result "958949.99999999996"?

Because it's *still* not decimal arithmetic.  You have 53 significant
bits in the approximation to 95.895, and "958949.99999999996" is
itself a decimal approximation to the exact binary value stored (read
the Tutorial appendix on fp issues for more on that).
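If you want to see that exact binary value spelled out in decimal, here's
a quick sketch -- it needs a much newer Python than the 2.3.4 above (any
recent Python 3, where the Decimal constructor accepts a float and
converts it exactly):

    from decimal import Decimal

    # Decimal(float) converts the stored 53-bit value exactly, with no
    # extra rounding, so this prints the full decimal expansion of the
    # double nearest to 95.895 -- a value a little below 95.895, of which
    # "95.894999999999996" is only a 17-significant-digit approximation.
    print(Decimal(95.895))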

There are 14 significant bits in 10000.  The product thus has 53+14 =
67, or 53+14-1 = 66, significant bits, and has to be rounded to fit
back into 53 significant bits.  None of that happens in base 10.
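One way to watch that rounding happen is to do the multiplication exactly
and only squeeze back into a float at the end.  A sketch, again assuming a
recent Python 3 (the fractions module keeps the arithmetic exact until you
ask for a float back):

>>> from fractions import Fraction
>>> exact = Fraction(95.895) * 10000   # exact product of the stored value
>>> exact < 958950                     # it falls a hair short of 958950
True
>>> float(exact)                       # rounding to 53 bits lands on 958950.0
958950.0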

> IOW, how'd I get back the lost bits?

It happened to round up.  Here's a simpler example, where it happens
to round down instead:

>>> .1
0.10000000000000001
>>> .1 * 10
1.0
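Doing that last one exactly (same sketch, recent Python 3) shows the
round-down explicitly:

>>> from fractions import Fraction
>>> Fraction(0.1) > Fraction(1, 10)   # the stored value is a bit high
True
>>> float(Fraction(0.1) * 10)         # but the product rounds back down
1.0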


