[Numpy-discussion] Preventing lossy cast for new float dtypes ?

Frédéric Bastien nouiz at nouiz.org
Mon Aug 20 09:42:54 EDT 2012


On Sat, Aug 18, 2012 at 9:28 AM, David Cournapeau <cournape at gmail.com> wrote:
> Hi,
>
> I have started toying with implementing a quad precision dtype for
> numpy on supported platforms, using the __float128 + quadmath lib from
> gcc. I have noticed invalid (and unexpected) downcast to long double
> in some cases, especially for ufuncs (e.g. when I don't define my own
> ufunc for a given operation).
>
> Looking down in numpy ufunc machinery, I can see that the issue is
> coming from the assumption that long double is the highest precision
> possible for a float type, and the only way I can 'fix' this is to
> define kind to a value != 'f' in my dtype definition (in which case I
> get an expected invalid cast exception). Is there a way to still avoid
> those casts while keeping the 'f' kind ?

I never looked at that code, but why not change the ufunc machinery to
remove the current assumption? I suppose the fact that you're asking
means this is not trivial to do?
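
For context, the assumption being discussed can be observed from Python without the quad dtype itself: NumPy's built-in float hierarchy tops out at long double, and the casting rules reflect that. A minimal sketch:

```python
import numpy as np

# NumPy treats long double as the widest built-in 'f'-kind type.
# Widening a float64 to long double is always considered safe:
print(np.can_cast(np.float64, np.longdouble))  # widening cast: allowed

# Narrowing long double to float64 may lose precision, so under the
# default 'safe' casting rule it is typically refused (on platforms
# where long double is genuinely wider than double):
print(np.can_cast(np.longdouble, np.float64))

# A user dtype of kind 'f' wider than long double has no slot above
# long double in this hierarchy, which is why the ufunc machinery can
# pick a lossy downcast when no matching ufunc loop is registered.
```

The sketch only illustrates the casting hierarchy; the actual fix David is asking about lives in the C-level ufunc type-resolution code, not in these Python-level checks.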

Fred
