[Numpy-discussion] Odd-looking long double on windows 32 bit

Matthew Brett matthew.brett at gmail.com
Tue Nov 15 01:22:50 EST 2011


Hi,

On Mon, Nov 14, 2011 at 10:08 PM, David Cournapeau <cournape at gmail.com> wrote:
> On Mon, Nov 14, 2011 at 9:01 PM, Matthew Brett <matthew.brett at gmail.com> wrote:
>> Hi,
>>
>> On Sun, Nov 13, 2011 at 5:03 PM, Charles R Harris
>> <charlesr.harris at gmail.com> wrote:
>>>
>>>
>>> On Sun, Nov 13, 2011 at 3:56 PM, Matthew Brett <matthew.brett at gmail.com>
>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On Sun, Nov 13, 2011 at 1:34 PM, Charles R Harris
>>>> <charlesr.harris at gmail.com> wrote:
>>>> >
>>>> >
>>>> > On Sun, Nov 13, 2011 at 2:25 PM, Matthew Brett <matthew.brett at gmail.com>
>>>> > wrote:
>>>> >>
>>>> >> Hi,
>>>> >>
>>>> >> On Sun, Nov 13, 2011 at 8:21 AM, Charles R Harris
>>>> >> <charlesr.harris at gmail.com> wrote:
>>>> >> >
>>>> >> >
>>>> >> > On Sun, Nov 13, 2011 at 12:57 AM, Matthew Brett
>>>> >> > <matthew.brett at gmail.com>
>>>> >> > wrote:
>>>> >> >>
>>>> >> >> Hi,
>>>> >> >>
>>>> >> >> On Sat, Nov 12, 2011 at 11:35 PM, Matthew Brett
>>>> >> >> <matthew.brett at gmail.com>
>>>> >> >> wrote:
>>>> >> >> > Hi,
>>>> >> >> >
>>>> >> >> > Sorry for my continued confusion here.  This is numpy 1.6.1 on
>>>> >> >> > Windows XP, 32-bit.
>>>> >> >> >
>>>> >> >> > In [2]: np.finfo(np.float96).nmant
>>>> >> >> > Out[2]: 52
>>>> >> >> >
>>>> >> >> > In [3]: np.finfo(np.float96).nexp
>>>> >> >> > Out[3]: 15
>>>> >> >> >
>>>> >> >> > In [4]: np.finfo(np.float64).nmant
>>>> >> >> > Out[4]: 52
>>>> >> >> >
>>>> >> >> > In [5]: np.finfo(np.float64).nexp
>>>> >> >> > Out[5]: 11
>>>> >> >> >
>>>> >> >> > If nmant is 52 - so 53 bits of precision, counting the implicit
>>>> >> >> > leading bit - then 2**53+1 should not be representable, and sure
>>>> >> >> > enough:
>>>> >> >> >
>>>> >> >> > In [6]: np.float96(2**53)+1
>>>> >> >> > Out[6]: 9007199254740992.0
>>>> >> >> >
>>>> >> >> > In [7]: np.float64(2**53)+1
>>>> >> >> > Out[7]: 9007199254740992.0
>>>> >> >> >
>>>> >> >> > If the nexp is right, the max should be higher for the float96
>>>> >> >> > type:
>>>> >> >> >
>>>> >> >> > In [9]: np.finfo(np.float64).max
>>>> >> >> > Out[9]: 1.7976931348623157e+308
>>>> >> >> >
>>>> >> >> > In [10]: np.finfo(np.float96).max
>>>> >> >> > Out[10]: 1.#INF
>>>> >> >> >
>>>> >> >> I see that long double in C is 12 bytes wide, and double is the
>>>> >> >> usual 8 bytes.
>>>> >> >>
>>>> >> >> Sorry - sizeof(long double) is 12 using mingw.  I see that long
>>>> >> >> double
>>>> >> >> is the same as double in MS Visual C++.
>>>> >> >>
>>>> >> >> http://en.wikipedia.org/wiki/Long_double
>>>> >> >>
>>>> >> >> but, as expected from the name:
>>>> >> >>
>>>> >> >> In [11]: np.dtype(np.float96).itemsize
>>>> >> >> Out[11]: 12
>>>> >> >>
>>>> >> >
>>>> >> > Hmm, good point. There should not be a float96 on Windows using the
>>>> >> > MSVC compiler, and the longdouble types 'gG' should return float64
>>>> >> > and complex128 respectively. OTOH, I believe the mingw compiler has
>>>> >> > real float96 types but I wonder about library support. This is
>>>> >> > really a build issue and it would be good to have some feedback on
>>>> >> > what different platforms are doing so that we know if we are doing
>>>> >> > things right.
>>>> >>
>>>> >> Is it possible that numpy is getting confused by being compiled with
>>>> >> mingw on top of a Visual Studio Python?
>>>> >>
>>>> >> Some further forensics seem to suggest that, despite the fact that
>>>> >> the math suggests float96 is float64, the storage format is in fact
>>>> >> 80-bit extended precision:
>>>> >>
>>>> >
>>>> > Yes, extended precision is the gcc long double type on Intel
>>>> > hardware; the 96/128 bit widths come from alignment on 4 or 8 byte
>>>> > boundaries. With MSVC, double and long double are both IEEE double,
>>>> > and on SPARC, long double is IEEE quad precision.
>>>>
>>>> Right - but I think my investigation is showing that the longdouble
>>>> numbers are being _stored_ as 80-bit, but the math on those numbers is
>>>> 64-bit.
>>>>
>>>> Is there a reason that numpy can't do 80-bit math on these numbers?
>>>> If there is, is there any point in having a float96 on Windows?
>>>
>>> It's a compiler/architecture thing and depends on how the compiler
>>> interprets the long double C type. The gcc compiler does do 80-bit math
>>> on Intel/AMD hardware. MSVC doesn't, and probably never will. MSVC
>>> shouldn't produce float96 numbers; if it does, it is a bug. Mingw uses
>>> the gcc compiler, so the numbers are there, but if it uses the MS
>>> library it will have to convert them to double to do computations like
>>> sin(x), since there are no Microsoft routines for extended precision. I
>>> suspect that the gcc/MS combo is what is producing the odd results you
>>> are seeing.
>>
>> I think we might be talking past each other a bit.
>>
>> It seems to me that, if float96 must use float64 math, then it should
>> be removed from the numpy namespace, because
>
> If we were to do so, it would break too much code.

David - please - obviously I'm not suggesting removing it without
deprecating it.

>> a) It implies higher precision than float64 but does not provide it
>> b) It uses more memory to no obvious advantage
>
> There is an obvious advantage: handling memory blocks that use long
> double, created outside numpy (or even outside Python).

Right - but that's a bit arcane, and I would have thought
np.longdouble would be a good enough name for that use.  Of course,
users may be surprised, as I was, that memory allocated for higher
precision is paired with float64 math, and that may take them some
time to work out.  I'll say again that 'longdouble' says to me
'something specific to the compiler' while 'float96' says 'something
standard in numpy' - and that is why I was surprised when I found out
what float96 actually is.
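
For example, here is a quick runtime check of the precision the
arithmetic actually delivers, independent of what finfo reports (a
minimal sketch - the outputs in the comments are my expectation for a
mingw-built numpy on Windows, based on the session above, not
something I have verified elsewhere):

import numpy as np

def effective_mantissa_bits(ftype):
    # Halve eps until 1 + eps rounds back to 1; counting the halvings
    # that still change the sum gives the precision the arithmetic
    # actually uses, whatever finfo claims.
    one = ftype(1)
    eps = ftype(1)
    bits = 0
    while one + eps != one:
        eps = eps / ftype(2)
        bits += 1
    return bits - 1

print(effective_mantissa_bits(np.float64))  # 52
print(effective_mantissa_bits(np.float96))  # 52 here; 63 if the math
                                            # were really 80-bit

If float96 arithmetic were really extended precision, the second call
should report 63 mantissa bits rather than 52.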

> Otherwise, while gcc indeed supports long double, the fact that the C
> runtime doesn't really mean it is hopeless to reach any kind of
> consistency.

I'm sorry for my ignorance, but which numerical algorithms come from
the C runtime?  I guess not addition, judging from the snippet in my
last mail.  Is the numpy code deliberately using float64 math for
everything on the basis that it may not be able to use 80-bit
precision for some operations?
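
To make the question concrete, here is the kind of probe I have in
mind (again a sketch - the values in the comments are what I would
expect, and probe 2 assumes the addition in probe 1 keeps the extra
bits):

import numpy as np

# Probe 1: hardware arithmetic.  2**60 + 1 needs 61 mantissa bits,
# representable in 80-bit extended precision but not in float64.
big = np.float96(2) ** 60
print((big + np.float96(1)) - big)  # 1.0 if addition keeps 80 bits,
                                    # 0.0 if it is done at float64

# Probe 2: a C library call.  If sin() is really the double-precision
# routine, inputs that differ below float64 resolution give bitwise
# identical results.
a = np.float96(0.5)
b = a + np.float96(2.0) ** -60      # differs from a beyond float64
print(np.sin(b) - np.sin(a))        # nonzero (~8e-19) for a true
                                    # 80-bit sin(), exactly 0.0 for a
                                    # double-precision sin()

Hardware addition should not need the C runtime at all, which is why
the first result matters independently of the second.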

> And I will reiterate what I said before about long
> double: if you care about your code behaving consistently across
> platforms, just forget about long double.

Do you agree that the current state of float96 on Windows is hard to understand?

If so, what do you think we can do to improve it?

Best,

Matthew


