[Numpy-discussion] Importance of order when summing values in an array

Charles R Harris charlesr.harris at gmail.com
Tue Dec 9 23:01:56 EST 2008


On Tue, Dec 9, 2008 at 8:10 PM, Robert Kern <robert.kern at gmail.com> wrote:

> On Tue, Dec 9, 2008 at 21:01, Charles R Harris
> <charlesr.harris at gmail.com> wrote:
> >
> >
> > On Tue, Dec 9, 2008 at 1:40 PM, Robert Kern <robert.kern at gmail.com>
> > wrote:
> >>
> >> On Tue, Dec 9, 2008 at 09:51, Nadav Horesh <nadavh at visionsense.com>
> >> wrote:
> >> > As much as I know, float128 is in fact 80 bits (64-bit mantissa + 15-bit
> >> > exponent + sign bit), so the precision is 18-19 digits (not 34)
> >>
> >> float128 should be 128 bits wide. If it's not on your platform, please
> >> let us know as that is a bug in your build.
> >
> > I think he means the actual precision is IEEE extended precision; the
> > number just happens to be stored in larger chunks of memory for
> > alignment purposes.
>
> Ah, that's good to know. Yes, float128 on my Intel Mac behaves this way.
>
> In [12]: f = finfo(float128)
>
> In [13]: f.nmant
> Out[13]: 63
>
> In [14]: f.nexp
> Out[14]: 15
>
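
For reference, a minimal sketch of the same check as a plain script -- the
values in the comments assume an x86 build, where long double is 80-bit
extended precision padded out for alignment; other platforms may report
different numbers:

import numpy as np

ld = np.finfo(np.longdouble)             # np.float128 is the same type where it exists
print(ld.nmant, ld.nexp)                 # 63 15 -> x87 extended precision, not IEEE quad
print(ld.precision)                      # 18    -> roughly 18-19 significant decimal digits
print(np.dtype(np.longdouble).itemsize)  # 16    -> storage padded to 128 bits for alignment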

Yep. That's the reason I worry a bit about what will happen when IEEE quad
precision comes out; it really is 128 bits wide and the normal identifiers
won't account for the difference. I expect C will just call them long
doubles and they will get the 'g' letter code just like extended precision
does now.
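
A quick way to see the current mapping on a given machine -- the 'g' code
already refers to whatever the platform calls long double, so the sizes in
the comments assume an x86 build:

import numpy as np

print(np.dtype('g') == np.dtype(np.longdouble))  # True: 'g' is the long double code
print(np.dtype('g').itemsize)                    # 16 on 64-bit x86, 12 on 32-bit x86,
                                                 # 8 where long double is plain double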

Chuck

