is int(round(val)) safe?

Mike Meyer mwm at mired.org
Tue Nov 23 11:50:23 EST 2004


bokr at oz.net (Bengt Richter) writes:

> On Mon, 22 Nov 2004 15:58:54 -0500, Peter Hansen <peter at engcorp.com> wrote:
>>Russell E. Owen wrote:
>>The problem* with floating point is inaccurate representation
>>of certain _fractional_ values, not integer values.
>
> Well, you mentioned really large integers, and I think it's worth
> mentioning that you can get inaccurate representation of certain of those
> values too. I.e., what you really have (for ieee 754 doubles) is 53 bits
> to count with in steps of one weighted unit, and the unit can be 2**0
> or 2**otherpower, where otherpower has 11 bits to represent it, more or less
> +- 2**10 with an offset for 53. If the unit step is 2**1, you get twice the range
> of integers, counting by two's, which doesn't give you a way of representing the
> odd numbers between accurately. So it's not only fractional values that can get
> truncated on the right. Try adding 1.0 to 2.0**53 ;-)
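Bengt's closing suggestion is easy to check directly; a quick sketch:

```python
# At 2**53 the spacing between adjacent doubles becomes 2.0,
# so adding 1.0 is lost to rounding, while adding 2.0 is not.
big = 2.0 ** 53
print(big + 1.0 == big)   # True: the +1.0 vanished
print(big + 2.0 == big)   # False: 2.0 is a representable step here
```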

It's even easier than that to get integer-valued floating point numbers
that aren't exact. Consider:

>>> long(1e70)
10000000000000000725314363815292351261583744096465219555182101554790400L

I don't know the details of 754 FP, but the FP I'm used to represents
*all* numbers as a binary fraction times a power of two. A power of ten
is only exact while its odd factor (a power of five) still fits in the
53-bit significand -- for doubles that holds up through 1e22 -- so
1e<big enough> will be wrong if you ask for enough digits.
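The gap above can be measured directly; a minimal sketch in modern
Python, with int() playing the role of long():

```python
# 10**70 needs far more than 53 bits, so the double 1e70 is a
# rounded approximation of the exact integer 10**70.
print(int(1e70) == 10 ** 70)     # False: the double is off
print(int(1e70) - 10 ** 70)      # the (large, nonzero) rounding error

# By contrast, 1e22 is the largest power of ten a double holds exactly,
# because 5**22 still fits in the 53-bit significand.
print(float(10 ** 22) == 1e22)   # True
```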

This recently caused someone to propose that 1e70 should be a long
instead of a float. No one mentioned the idea of making

[0-9]+[eE]\+?[0-9]+ be of integer type, and

[0-9]*\.[0-9]+[eE][+-]?[0-9]+ be a float. [0-9]+[eE]-[0-9]+ would also
be a float. No simple rule covers all of this, unfortunately.
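The proposed split can be sketched as a small classifier; the patterns
and the classify() helper below are hypothetical, just following the
rule above (exponent with no sign or a '+' and no decimal point ->
integer, otherwise float):

```python
import re

# integer-like: digits, e/E, optional '+', digits -- no '.' and no '-'
int_like = re.compile(r'^[0-9]+[eE]\+?[0-9]+$')
# float-like: either a decimal point, or a negative exponent
float_like = re.compile(r'^[0-9]*\.[0-9]+[eE][+-]?[0-9]+$'
                        r'|^[0-9]+[eE]-[0-9]+$')

def classify(token):
    """Classify a numeric literal under the proposed rule."""
    if int_like.match(token):
        return 'int'
    if float_like.match(token):
        return 'float'
    return 'other'

print(classify('1e70'))    # 'int' under this proposal
print(classify('1.5e3'))   # 'float'
print(classify('1e-3'))    # 'float'
```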

   <mike

-- 
Mike Meyer <mwm at mired.org>			http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
