Bug in floating-point addition: is anyone else seeing this?

Henrique Dante de Almeida hdante at gmail.com
Thu May 22 13:34:32 EDT 2008


On May 22, 6:09 am, Ross Ridge <rri... at caffeine.csclub.uwaterloo.ca>
wrote:
> Henrique Dante de Almeida  <hda... at gmail.com> wrote:
>
> > Finally (and the answer is obvious). 387 breaks the standards and
> >doesn't use IEEE double precision when requested to do so.
>
> Actually, the 80387 and the '87 FPU in all other IA-32 processors
> do use IEEE 754 double-precision arithmetic when requested to do so.

 True. :-/

 It seems that the x87 uses a control-word flag to select the
precision, so a conformant implementation would need to save and
restore that flag around calls. No wonder gcc doesn't try to do this.

 In that case, there are two possible options for Python:

 - Leave it as it is. The Python language reference says that floating-
point operations are based on the underlying C implementation. Also,
the relative error in this case is around 1e-16, which is smaller than
the expected error for IEEE doubles (~2e-16), so the result is
non-standard but acceptable (though I believe that in extreme cases the
rounding error could be marginally larger than the expected error).

 - Use long doubles on architectures that don't support SSE2, and use
SSE2 IEEE doubles on architectures that do.

 A third option would be for Python to set the x87 precision to double
and switch it back to extended precision whenever it calls into C code
(but that would be too much work for nothing).



More information about the Python-list mailing list