Bug in floating-point addition: is anyone else seeing this?

Henrique Dante de Almeida hdante at gmail.com
Thu May 22 00:36:15 EDT 2008


On May 22, 1:26 am, Henrique Dante de Almeida <hda... at gmail.com>
wrote:
> On May 21, 3:38 pm, Mark Dickinson <dicki... at gmail.com> wrote:
>
> > >>> a = 1e16-2.
> > >>> a
> > 9999999999999998.0
> > >>> a+0.999     # gives expected result
> > 9999999999999998.0
> > >>> a+0.9999   # doesn't round correctly.
> > 10000000000000000.0
>
>  Notice that 1e16-1 doesn't exist in IEEE double precision:
>  1e16-2 == 0x1.1c37937e07fffp+53
>  1e16 == 0x1.1c37937e08p+53
>
>  (that is, the hex representation ends with "7fff", then goes to
> "8000").
>
>  So, it's just rounding. It could go up, to 1e16, or down, to 1e16-2.
> This is not a bug, it's a feature.
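
 For the record, with Python 2.6 or later float.hex() prints the same
hex form directly (this example is my addition, not part of the earlier
message):

>>> (1e16 - 2).hex()
'0x1.1c37937e07fffp+53'
>>> (1e16).hex()
'0x1.1c37937e08000p+53'
>>> (1e16 - 1).hex()   # not representable; the tie rounds to the even neighbour
'0x1.1c37937e08000p+53'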

 I didn't answer your question. :-/

 Adding 0.9999 to 1e16-2 gives an exact result that is 0.9999 away from
1e16-2 and about 1.0001 away from 1e16, so the default round-to-nearest
mode should return 1e16-2. So that's strange.
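
 A quick sketch with the fractions module (my addition; Python 2.7 or
later) confirms which representable double the exact sum is nearer to:

from fractions import Fraction

a = 1e16 - 2.0                          # exactly 9999999999999998.0
exact = Fraction(a) + Fraction(0.9999)  # exact rational sum of the two doubles

down = exact - Fraction(1e16 - 2.0)     # distance to 1e16-2: about 0.9999
up = Fraction(1e16) - exact             # distance to 1e16: about 1.0001

print(down < up)                        # True: 1e16-2 is the nearer double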

 The following code compiled with gcc 4.2 (without optimization) gives
the same result:

#include <stdio.h>

int main (void)
{
	double a;

	/* Read doubles until EOF; print each value and the two sums
	   in C99 hex float notation (%a) to show the exact rounding. */
	while (scanf("%lg", &a) == 1) {
		printf("%a\n", a);
		printf("%a\n", a + 0.999);
		printf("%a\n", a + 0.9999);
	}
	return 0;
}
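
 To reproduce, save it as, say, fptest.c, build without optimization
(e.g. "gcc fptest.c -o fptest") and type 9999999999999998; the three %a
lines show exactly which double each sum rounds to.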


