[Numpy-discussion] float128 in fact float80

Matthew Brett matthew.brett at gmail.com
Mon Oct 17 14:39:50 EDT 2011


Hi,

On Sun, Oct 16, 2011 at 6:22 PM, Charles R Harris
<charlesr.harris at gmail.com> wrote:
>
>
> On Sun, Oct 16, 2011 at 6:13 PM, Nathaniel Smith <njs at pobox.com> wrote:
>>
>> On Sun, Oct 16, 2011 at 4:29 PM, Charles R Harris
>> <charlesr.harris at gmail.com> wrote:
>> > On Sun, Oct 16, 2011 at 4:16 PM, Nathaniel Smith <njs at pobox.com> wrote:
>> >> I understand the argument that you don't want to call it "float80"
>> >> because not all machines support a float80 type. But I don't
>> >> understand why we would solve that problem by making up two *more*
>> >> names (float96, float128) that describe types that *no* machines
>> >> actually support... this is incredibly confusing.
>> >
>> > Well, float128 and float96 aren't interchangeable across architectures
>> > because of the different alignments, C long double isn't portable either,
>> > and float80 doesn't seem to be available anywhere. What concerns me is the
>> > difference between extended and quad precision, both of which can occupy
>> > 128 bits. I've complained about that for several years now, but as to
>> > extended precision, just don't use it. It will never be portable.
>>
>> I think part of the confusion here is about when a type is named like
>> 'float<N>', does 'N' refer to the size of the data or to the minimum
>> alignment? I have a strong intuition that it should be the former, and
>> I assume Matthew does too. If we have a data structure like
>>  struct { uint8_t flags; void * data; }
>
> We need both in theory. In practice, floats and doubles are pretty well
> defined these days, but long doubles depend on architecture and compiler for
> alignment, and even for representation in the case of PPC. I don't regard
> these facts as obscure if one is familiar with floating point, but most
> folks aren't, and I agree that it can be misleading if one assumes that
> types and storage space are strongly coupled. This also ties in to the
> problem with ints and longs, which may both be int32 despite having
> different C names.
>
>>
>> then 'flags' will actually get 32 or 64 bits of space... but we would
>> never, ever refer to it as a uint32 or a uint64! I know these extended
>> precision types are even weirder because the compiler will insert that
>> padding unconditionally, but the intuition still stands, and obviously
>> some proportion of the userbase will share it.
>>
>> If our API makes smart people like Matthew spend a week going around
>> in circles, then our API is dangerously broken!
>>
>
> I think "dangerously" is a bit overly dramatic.
>
>>
>> The solution is just to call it 'longdouble', which clearly
>> communicates 'this does some quirky thing that depends on your C
>> compiler and architecture'.
>>
>
> Well, I don't know. If someone is unfamiliar with floats, I would expect
> they would file bug reports when a file of longdouble type written on a
> 32-bit system couldn't be read on a 64-bit system. It might be better to
> somehow combine both the IEEE type and the storage alignment.

David was pointing out that e.g. np.float128 could be a different thing on
SPARC, PPC and Intel, so it seems to me that float128 is a false friend if
we think it at all likely that people will use platforms other than Intel.

Personally, if I saw 'longdouble' as a datatype, it would not surprise me to
find it wasn't portable across platforms, including between 32-bit and
64-bit systems.

float80_96 and float80_128 seem fine to me, but it would also be good to
suggest longdouble as the default name for the platform-specific
higher-precision datatype, to make code portable across platforms.
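To make the mismatch concrete, here is a small sketch (the printed values
are platform-dependent, which is exactly the point): on 64-bit x86 the
dtype's itemsize reports the padded 16-byte storage width, giving the name
float128, while finfo reveals the actual 63-bit mantissa of the 80-bit x87
extended type rather than the 112-bit mantissa of true IEEE quad precision.

```python
import numpy as np

# The dtype name and itemsize describe the *storage* (padding included):
# typically 12 bytes / 'float96' on 32-bit x86, 16 bytes / 'float128'
# on 64-bit x86, 8 bytes where long double is plain double (e.g. MSVC).
ld = np.dtype(np.longdouble)
info = np.finfo(np.longdouble)

print("dtype name:   ", ld.name)       # storage-derived alias, varies
print("itemsize:     ", ld.itemsize)   # bytes of storage, padding included
print("mantissa bits:", info.nmant)    # precision actually carried
```

On x86, nmant prints 63 (x87 extended) even when the name says float128; a
genuine binary128 type would report 112. That gap between the name and the
precision is the "false friend" problem described above.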

See you,

Matthew
