[Python-Dev] Re: PEP239 (Rational Numbers) Reference Implementation and new issues

Oren Tirosh oren-py-d@hishome.net
Thu, 3 Oct 2002 03:21:47 -0400


On Thu, Oct 03, 2002 at 02:01:49AM -0400, Tim Peters wrote:
> [Oren Tirosh]
> > But why would you want to convert floating point numbers to rationals
> > in the first place?
> 
> For one thing, to continue an exact calculation that has grown too large to
> continue in floating-point without losing information, but where the extreme
> expense of using rationals instead of floating point pushes you toward using
> fp just as long as you can.
> 
> > Floating point numbers are generally inexact.
> 
> The IEEE-754 standard mandates an "inexact flag" (and your HW has one) for
> this very kind of purpose.  Your FPU knows for certain whether you've ever
> lost a bit of information when doing basic fp operations; unfortunately,
> it's still extremely clumsy to get at this info.

The IEEE-754 standard also mandates NaNs, Infs, rounding modes and several
other features. These are not supported by Python in a consistent and
portable manner, nor is it likely that they can be supported on all Python
platforms in the foreseeable future, because of the limitations of the
underlying C libraries.
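
For instance, even a probe as simple as the following is not guaranteed to
work everywhere, since constructing the special values goes through the
platform's C library (a sketch of the kind of check one would like to be
able to write portably):

    # Whether these constructions work at all is platform-dependent.
    inf = float('inf')      # may raise ValueError with some C libraries
    nan = float('nan')      # likewise

    print(inf > 1e308)      # True where it works: inf exceeds any finite double
    print(nan == nan)       # False: a NaN compares unequal even to itself
    print(inf - inf)        # nan: arithmetic on infinities can produce NaNs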

> > Converting them to exact rationals can give a false sense of
> > exactitude.
> 
> So can printing more digits than warranted <wink>.

Yes, my strength-of-materials teacher drove this point home by taking off
points for results reported with either too many or too few significant
digits.
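
To make the false exactitude concrete: converting the float 0.1 to an exact
rational does not give 1/10, it gives the exact value of the nearest IEEE-754
double, which merely looks more precise. A sketch, assuming a Fraction type
along the lines PEP 239 proposes (spelled here as a hypothetical 'fractions'
module, which the stdlib does not have):

    from fractions import Fraction

    exact_tenth = Fraction(1, 10)
    # Exact numerator/denominator of the double closest to 0.1:
    float_tenth = Fraction(*(0.1).as_integer_ratio())

    print(exact_tenth)                 # 1/10
    print(float_tenth)                 # 3602879701896397/36028797018963968
    print(exact_tenth == float_tenth)  # False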

> > Any calculation involving a floating point number should taint
> > the result with inexactitude and yield a floating point result.
> 
> How old are you?  See -- integers can be inexact too.  Representation has
> nothing to do with exactness, which is one part of this story Scheme got
> straight.

I am approximately 61131/76 fortnights old :-)

An integer is always exact, and integer arithmetic is always exact, with the
exception of division. The mapping of such a mathematical integer to some
real-world meaning may be inexact, though. I won't sue anyone because a
pack of rubber bands doesn't contain exactly 1000 pieces.
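
Division is where integer exactness breaks down, and it is exactly what a
rational type keeps exact. A sketch, again assuming a 'fractions' module
that the stdlib does not currently have:

    from __future__ import division
    from fractions import Fraction

    print(7 // 2)          # 3   -- truncated, information lost
    print(7 / 2)           # 3.5 -- a float, subject to rounding in general
    print(Fraction(7, 2))  # 7/2 -- exact

    print(0.1 + 0.2 == 0.3)                                      # False: floats round
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True: rationals don't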

A double precision floating point number may represent some number exactly
and produce exact results for some arithmetic operations. In practice,
though, if I type 0.3 and get 0.29999999999999999, the numbers aren't exact
in the first place; and if I have no access to the flags that tell me whether
a result is exact or not, I must treat all results as inexact.
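
This is roughly the kind of flag access I mean, sketched against a decimal
type whose arithmetic context records whether any result had to be rounded
(modelled on the IBM decimal arithmetic spec's Inexact condition; nothing
comparable is exposed for binary floats in Python):

    from decimal import Decimal, getcontext, Inexact

    ctx = getcontext()
    ctx.prec = 6

    ctx.clear_flags()
    print(Decimal(1) / Decimal(4))  # 0.25 -- fits in 6 digits
    print(ctx.flags[Inexact])       # False: no information was lost

    ctx.clear_flags()
    print(Decimal(1) / Decimal(3))  # 0.333333 -- had to be rounded
    print(ctx.flags[Inexact])       # True: the result is tainted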

As usual, your arguments are flawless - in an academic sort of way. In
practice, though, I think my original point about the inexactness of floats
and the tainting of results still stands.
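
To sketch what I mean by tainting (purely illustrative; the Tainted name and
its behaviour are made up for this mail, not part of PEP 239): a value
carries an exactness bit, and any operation that touches an inexact operand
yields an inexact result.

    class Tainted:
        def __init__(self, value, exact=True):
            self.value = value
            self.exact = exact

        def _coerce(self, other):
            if isinstance(other, Tainted):
                return other
            # Bare ints are exact; bare floats are assumed inexact.
            return Tainted(other, exact=not isinstance(other, float))

        def __add__(self, other):
            other = self._coerce(other)
            return Tainted(self.value + other.value,
                           self.exact and other.exact)

        __radd__ = __add__

        def __repr__(self):
            return 'Tainted(%r, %s)' % (
                self.value, 'exact' if self.exact else 'inexact')

    print(Tainted(3) + 4)    # Tainted(7, exact)
    print(Tainted(3) + 0.5)  # Tainted(3.5, inexact) -- the float taints the sum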

	Oren