[Python-Dev] Expert floats

Andrew Koenig ark-mlist at att.net
Wed Mar 31 13:09:56 EST 2004


> Pragmatically more convenient by what metric?

Short output is easier to read than long output.

> No matter how you slice it, binary floating point contains surprises
> for the uninitiated.  The question is *WHEN* do you hammer the point home?
> I've yet to see you address this directly.

I haven't, because I'm unconvinced that there is a single right answer.

Decimal floating-point has almost all the pitfalls of binary floating-point,
yet I do not see anyone arguing against decimal floating-point on the basis
that it makes the pitfalls less apparent.
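For instance, a quick sketch using Python's `decimal` module (new in Python 2.4, roughly contemporaneous with this thread) shows the same kind of representation error, just in a different base: with the default 28-digit context, one third cannot be stored exactly, so multiplying it back by three does not recover 1.

```python
from decimal import Decimal

# With the default 28-digit context, decimal arithmetic rounds too:
third = Decimal(1) / Decimal(3)
print(third)        # 0.3333333333333333333333333333
print(third * 3)    # 0.9999999999999999999999999999

# The round-off is real, merely expressed in base ten:
assert third * 3 != Decimal(1)
```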

> > If I can enter a number as 0.1, printing that number as 0.1 does not
> > introduce any errors that were not already there, as proved by the
> > fact that reading that 0.1 back will yield exactly the same value.

> It's not a matter of introducing errors, it's a matter of making the
> errors visible.  Python is, among other things, a language suitable for
> introducing people to computers.  That's why the Zen of Python contains
> such gems as

>     Explicit is better than implicit.
>     Errors should never pass silently.
>     In the face of ambiguity, refuse the temptation to guess.

> If you're going to continue pressing your point, please elucidate your
> reasoning in terms of Python's design principles.

Beautiful is better than ugly.
Simple is better than complex.
Readability counts.

When I write programs that print floating-point numbers I usually want to
see one of the following:

	* a rounded representation with n significant digits,
	  where n is significantly less than 17
	* a rounded representation with n digits after the decimal point,
	  where n is often 2
	* the unbounded-precision exact decimal representation of the
	  number (which always exists, because every binary floating-point
	  number has a finite exact decimal representation)
	* the most convenient (i.e. shortest) way of representing the
	  number that will yield exactly the same result when read

Python gives me none of these, and instead gives me something else entirely
that is almost never what I would like to see, given the choice.  I
understand that I have the option of requesting the first two of these
choices explicitly, but I don't think there's a way to make any of them the
default.
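For reference, each of the four options above can be obtained in modern Python (a sketch; note that the fourth, shortest-round-trip behavior, only became the default `repr` later, in Python 3.1):

```python
from decimal import Decimal

x = 0.1  # actually the binary double nearest to 1/10

# 1. Rounded to n significant digits (here n = 6).
print(format(x, '.6g'))   # 0.1

# 2. Rounded to n digits after the decimal point (here n = 2).
print(format(x, '.2f'))   # 0.10

# 3. The unbounded-precision exact decimal value of the stored number.
#    Decimal(float) converts without rounding, so this is exact.
print(Decimal(x))         # 0.1000000000000000055511151231257827021181583404541015625

# 4. The shortest representation that reads back to exactly the same value
#    (the default repr since Python 3.1).
print(repr(x))            # 0.1
assert float(repr(x)) == x
```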

I'm not picking on Python specifically here, as I have similar objections to
the floating-point behavior of most other languages aside from Scheme (which
is not to my taste for other reasons).  However, I do think that this issue
is more subtle than one that can be settled by appealing to slogans.  In
particular, I *do* buy the argument that the current behavior is the best
that can be efficiently achieved while relying on the underlying C
floating-point conversions.

If you're really serious about hammering errors in early, why not have the
compiler issue a warning any time a floating-point literal cannot be exactly
represented?  <0.5 wink>
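Such a check is easy to sketch (the helper name is hypothetical; an actual compiler would hook this into literal parsing). A decimal literal is exactly representable precisely when parsing it as an exact decimal and expanding the nearest binary double give the same value:

```python
from decimal import Decimal

def exactly_representable(literal: str) -> bool:
    """True if the decimal literal equals the binary double it produces.

    Decimal(str) parses the literal exactly; Decimal(float) expands the
    nearest double exactly; they match only when no rounding occurred.
    """
    return Decimal(literal) == Decimal(float(literal))

print(exactly_representable('0.5'))   # True  (an exact power of two)
print(exactly_representable('0.1'))   # False (would trigger the warning)
```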
