[Python-Dev] Round Bug in Python 1.6?

Tim Peters tim_one@email.msn.com
Sat, 8 Apr 2000 03:23:16 -0400


[Guido]
> Have a look at what Java does; it seems to be doing this right:
>
> & jpython
> JPython 1.1 on java1.2 (JIT: sunwjit)
> Copyright (C) 1997-1999 Corporation for National Research Initiatives
> >>> import java.lang
> >>> x = java.lang.Float(3.1416)
> >>> x.toString()
> '3.1416'
> >>>

That Java does this is not an accident:  Guy Steele pushed for the same
rules he had earlier gotten into Scheme, although

a) The Java rules are much tighter than Scheme's.

and

b) He didn't prevail on this point in Java until version 1.1 (before then,
Java's double/float->string never produced more precision than ANSI C's
default %g format, so it was inadequate to preserve equality under I/O).
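
For instance, C's default %g keeps only 6 significant digits, nowhere near
enough to round-trip a double (a CPython session, but any C printf shows
the same):

    >>> x = 1.0 / 3.0
    >>> '%g' % x
    '0.333333'
    >>> float('%g' % x) == x
    False
    >>> float('%.17g' % x) == x    # 17 digits always round-trip a double
    True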

I suspect there was more than a bit of internal politics behind the delay,
as the 754 camp has never liked the "minimal width" gimmick(*), and Sun's C
and Fortran numerics (incl. their properly-rounding libc I/O routines) were
strongly influenced by 754 committee members.

> Could it be as simple as converting x +/- one bit and seeing how many
> differing digits there were?  (Not that +/- one bit is easy to
> calculate...)

Sorry, it's much harder than that.  See the papers (and/or David Gay's code)
I referenced before.
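
To at least pin down what "minimal width" means, though: it's the shortest
decimal string that reads back in as exactly the same double.  Here's a
brute-force sketch (this hypothetical shortest_repr is *not* Gay's
algorithm -- his code gets the same answer without trying every width):

    def shortest_repr(x):
        # Try 1 through 17 significant digits, returning the first
        # string that parses back to exactly the same double.  17
        # digits always suffice for a finite IEEE-754 double, so the
        # loop can't fall through for finite x.
        for precision in range(1, 18):
            s = '%.*g' % (precision, x)
            if float(s) == x:
                return s

    >>> shortest_repr(3.1416)
    '3.1416'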


(*) Why the minimal-width gimmick is disliked:  If you print a (32-bit) IEEE
float with minimal width, then read it back in as a (64-bit) IEEE double,
you may not get the same result as if you had converted the original float
to a double directly.  This is because "minimal width" here is *relative to*
the universe of 32-bit floats, and you don't always get the same minimal
width if you compute it relative to the universe of 64-bit doubles instead.
In other words, "minimal width" can lose accuracy needlessly -- but this
can't happen if you print the float to full precision instead.
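
A concrete sketch of that trap, using Python's struct module to simulate
32-bit floats (0.1 is just one convenient value where it springs):

    import struct

    def as_float32(x):
        # Round a double to the nearest IEEE-754 single, returned as
        # a double (Python has no 32-bit float type of its own).
        return struct.unpack('<f', struct.pack('<f', x))[0]

    x = as_float32(0.1)     # the 32-bit float nearest 1/10
    s = '0.1'               # minimal width relative to 32-bit floats
    assert as_float32(float(s)) == x    # same float when read back

    print(float(s) == x)    # False: reading the minimal-width string
                            # back as a double gives the double nearest
                            # 1/10, not the float's exact value
                            # 0.10000000149011612 -- converting the
                            # float to a double directly would have
                            # preserved it.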