[Numpy-discussion] Unexpected float96 precision loss

Charles R Harris charlesr.harris at gmail.com
Wed Sep 1 17:13:07 EDT 2010


On Wed, Sep 1, 2010 at 2:26 PM, Michael Gilbert
<michael.s.gilbert at gmail.com> wrote:

> Hi,
>
> I've been using numpy's float96 class lately, and I've run into some
> strange precision errors.  See example below:
>
>  >>> import numpy
>  >>> numpy.version.version
>  '1.5.0'
>  >>> import sys
>  >>> sys.version
>  '3.1.2 (release31-maint, Jul  8 2010, 01:16:48) \n[GCC 4.4.4]'
>  >>> x = numpy.array( [0.01] , numpy.float32 )
>  >>> y = numpy.array( [0.0001] , numpy.float32 )
>  >>> x[0]*x[0] - y[0]
>  0.0
>  >>> x = numpy.array( [0.01] , numpy.float64 )
>  >>> y = numpy.array( [0.0001] , numpy.float64 )
>  >>> x[0]*x[0] - y[0]
>  0.0
>  >>> x = numpy.array( [0.01] , numpy.float96 )
>  >>> y = numpy.array( [0.0001] , numpy.float96 )
>  >>> x[0]*x[0] - y[0]
>  -6.286572655403010329e-22
>
> I would expect the float96 calculation to also produce exactly 0.0, as
> in the float32 and float64 examples.  Why isn't this the case?
>
>
None of these numbers is exactly representable in IEEE binary floating
point, so what you are seeing is rounding error. The first two zeros
are coincidences: at float32 and float64 precision (about 7 and 16
digits respectively) x*x rounds to exactly the same value as y, so the
subtraction cancels, whereas float96 carries about 19 digits and
exposes the discrepancy.
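
To see that directly (a minimal sketch using the standard library's
fractions module, not from the original session): Fraction(f) gives the
exact rational value of a binary float, so you can check that the
stored 0.01 squared and the stored 0.0001 genuinely differ:

>>> from fractions import Fraction
>>> x = Fraction(0.01)    # exact rational value of the double nearest 0.01
>>> y = Fraction(0.0001)  # exact rational value of the double nearest 0.0001
>>> x * x == y            # the exact values are not equal
False

At 32 and 64 bits the product happens to round back to the same float
as y, so the subtraction cancels exactly; at the 80-bit extended
precision behind float96 the two round differently, and the small
residue is the rounding error itself.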

> Slightly off-topic: why was the float128 class dropped?
>
>
It wasn't, but you won't see it on a 32-bit system because of how the
gcc compiler pads long doubles for alignment. On common Intel
hardware/OS combinations, float96 and float128 are the same 80-bit
extended precision, just stored with different amounts of padding. In
general the extended precision formats are not portable, so watch out.
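
To check what long double actually is on a given platform, numpy.finfo
and the dtype itemsize will tell you (a sketch; the exact numbers are
platform dependent):

>>> import numpy
>>> numpy.finfo(numpy.longdouble).precision  # 18 decimal digits for 80-bit extended
>>> numpy.finfo(numpy.longdouble).eps        # 2**-63, about 1.08e-19, for 80-bit extended
>>> numpy.dtype(numpy.longdouble).itemsize   # 12 bytes -> float96, 16 -> float128

On hardware where long double is IEEE quad or PowerPC double-double you
will see different numbers, which is exactly the portability problem.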

Chuck