floating point in 2.0

Tim Peters tim.one at home.com
Fri Jun 8 00:29:11 EDT 2001


[Edward Jason Riedy]
> I've forgotten the exact language, but we're going to strongly
> recommend using Steele and White's algorithm by default:
>
> http://www.acm.org/pubs/citations/proceedings/pldi/93542/p112-steele/

Ack -- if you folks haven't learned anything else <wink> from the 16 years
of half-baked 754 support so far, believe me when I tell you that when it
comes to f.p., most vendors see "strongly recommended" and think "OK, we can
skip this one -- next."  Make it required or don't bother -- really.
Vendors don't compete on mere "quality of implementation" f.p. issues because
their customers overwhelmingly can't tell the difference, so don't care.  But
make it a requirement, and it shows up as a checklist item on contracts.
They're drawn up by people who don't care either, but who see "must" and
think "must".  That's why you have near-universal support for denorms and
directed rounding modes today, but virtually nothing that makes intended
("recommended") use of the NaN bits for retrospection, and see most vendors
punt on implementing ("recommended") double-extended.

> Given 0.1 converted to binary, this will print 0.1.

Even if I ask for, e.g., a %.55g format?  Steele & White think that's
appropriate, but they did this work in the context of the then-pending
Scheme standard.  After their paper appeared, IIRC it got a chilly reception
on David Hough's numeric-interest mailing list, because it's not "properly
rounded" in a clearly explainable sense.  If someone asks for 55 digits,
what "excuse" is there for not doing the best possible job of producing 55
digits?  There was also concern about needless loss of precision when, e.g.,
printing an IEEE float on one box and reading it back into an IEEE double.
"Round to the number of digits they ask for" allows for no loss of precision
when moving across formats.
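
To make the cross-format worry concrete, here's a small C sketch of my own
(the printf/strtod recipe is just an illustration, not anything from the
standards discussion):  the shortest string for the float nearest 1/10 is
plain "0.1", but reading "0.1" back as a double lands on a different value
than the float actually holds, while printing "enough" digits preserves the
value exactly.

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: shortest-digits output loses information when the reader
       uses a wider format; printing enough digits (17 for IEEE double)
       does not. */
    int main(void)
    {
        float f = 0.1f;                       /* nearest float to 1/10 */
        char buf[32];
        double from_short, from_long;

        from_short = strtod("0.1", NULL);     /* nearest double to 1/10 */

        sprintf(buf, "%.17g", (double) f);    /* enough digits to pin f down */
        from_long = strtod(buf, NULL);

        printf("exact float value  : %.25g\n", (double) f);
        printf("double from \"0.1\"  : %.25g\n", from_short);
        printf("double from \"%s\": %.25g\n", buf, from_long);
        /* from_long == (double) f, but from_short != (double) f. */
        return 0;
    }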

> It's a complicated algorithm, but worth it.

"Worth it" surely depends on the goal.  The knock against Steele & White's
version of this is sloth, not complication; David Gay's faster version can
be fairly characterized as (very) complicated, though.

> And it's not a lie.  It returns the shortest decimal string which
> will reproduce the given binary number after a decimal->binary
> conversion.

Provided that the source and destination formats have the same precision,
and assuming correctly-rounded decimal->binary conversion too.
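
Within a single format the property is easy to check even without a
shortest-digits printer; here's a tiny C sketch of my own, using the blunt
"enough digits" route (17 for IEEE double), since plain printf doesn't do
shortest-digits:

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: within one format, enough digits always round-trip.
       17 significant digits suffice for IEEE double (9 for float); a
       shortest-digits printer (Steele & White, or Gay) achieves the same
       round-trip with as few digits as possible. */
    int main(void)
    {
        double x = 0.1;               /* nearest double to 1/10 */
        char buf[32];
        double y;

        sprintf(buf, "%.17g", x);
        y = strtod(buf, NULL);
        assert(x == y);               /* exact round-trip, same precision */
        printf("%s round-trips to the same double\n", buf);
        return 0;
    }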

> Of course, you may worry about 0.1 + 0.1 + ... != 1.0.  We're
> also going to be giving a good deal of advice (and some
> requirements) on handling arithmetic.  I'm working on a paper
> with greater detail, but try the following C program for a
> taste of some intended advice:
> ========
> #include <stdio.h>
>
> /*
>   Accumulating in more precision than the data `deserves'
>   AND rounding the output gives the right answer for more
>   cases.
> */
>
> int
> main (void)
> {
>   int i;
>   float x = 0.1;
>   double accum = 0.0;
>   float accumf = 0.0;
>   float y;
>
>   for (i = 0; i < 100; ++i) {
>     accum += x;  /* accumulate in double... */
>     accumf += x;
>   }
>
>   y = (float) accum;  /* round back to float... */
>   printf ("%.36g\n%.36g\n%.36g\n", accumf, accum, y);
> }

Very good advice indeed.  What if they're accumulating in double instead?
Without reliable access to double-extended, I'm afraid we're left with
obscure transformations.  For most Python users, I expect decimal f.p. with
user-settable precision is going to be much easier to use.
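
He doesn't say which transformations; the classic one is compensated
(Kahan) summation, which buys back most of what a wider accumulator would
give you without needing double-extended.  A minimal C sketch (my
illustration, not something from the post; it assumes the compiler doesn't
re-associate the arithmetic, so no -ffast-math):

    #include <stdio.h>

    /* Compensated (Kahan) summation: carry the rounding error of each
       addition in a separate low-order correction term. */
    int main(void)
    {
        double x = 0.1;                  /* nearest double to 1/10 */
        double sum = 0.0;                /* naive accumulator */
        double ksum = 0.0, c = 0.0;      /* compensated accumulator + correction */
        int i;

        for (i = 0; i < 1000000; ++i) {
            double y, t;
            sum += x;

            y = x - c;                   /* apply the carried correction */
            t = ksum + y;                /* big + small: low bits of y may drop... */
            c = (t - ksum) - y;          /* ...recover what was dropped */
            ksum = t;
        }

        printf("naive      : %.17g\n", sum);
        printf("compensated: %.17g\n", ksum);
        return 0;
    }

On x87 hardware a long double accumulator gets the same effect more simply,
but, as noted above, access to double-extended is anything but reliable
across platforms.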




