is int(round(val)) safe?

Bengt Richter bokr at oz.net
Tue Nov 23 14:33:58 EST 2004


On Tue, 23 Nov 2004 10:50:23 -0600, Mike Meyer <mwm at mired.org> wrote:

>bokr at oz.net (Bengt Richter) writes:
>
>> On Mon, 22 Nov 2004 15:58:54 -0500, Peter Hansen <peter at engcorp.com> wrote:
>>>Russell E. Owen wrote:
>>>The problem* with floating point is inaccurate representation
>>>of certain _fractional_ values, not integer values.
>>
>> Well, you mentioned really large integers, and I think it's worth
>> mentioning that you can get inaccurate representation of certain of those
>> values too. I.e., what you really have (for IEEE 754 doubles) is 53 bits
>> to count with, in steps of one weighted unit, and the unit can be 2**0
>> or 2**otherpower, where otherpower has 11 bits to represent it, more or
>> less +- 2**10, with an offset for the 53. If the unit step is 2**1, you get
>> twice the range of integers, counting by twos, which gives you no way of
>> representing the odd numbers in between accurately. So it's not only
>> fractional values that can get truncated on the right. Try adding 1.0 to 2.0**53 ;-)
>
>It's much easier than that to get integer floating point numbers that
>aren't correct. Consider:
>
>>>> long(1e70)
>10000000000000000725314363815292351261583744096465219555182101554790400L

Yes. I was just trying to identify the exact point where you lose 1.0 granularity.
The last number, with all ones in the available significant bits (including the
hidden one), is 2.**53-1.0.
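(prb is a little helper from my private ut.miscutil that returns the binary
digits of a number as a string; it isn't published, but a minimal stand-in,
if you want to follow along, might be:)

 def prb(x):
     """Return the binary digits of int(x) as a string of 0s and 1s."""
     n = abs(int(x))              # only the integer part matters here
     if n == 0:
         return '0'
     digits = []
     while n:                     # peel bits off the low end first
         digits.append('01'[n & 1])
         n >>= 1
     digits.reverse()
     return ''.join(digits)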

 >>> from ut.miscutil import prb
 >>> prb(2.**53-1.0)
 '11111111111111111111111111111111111111111111111111111'
 >>> prb(2.**53-1.0).count('1')
 53
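(And just past that point the granularity becomes 2.0, which is the point of
the add-1.0 experiment above; results assume IEEE 754 doubles, as on most
platforms:)

 >>> 2.0**53 + 1.0 == 2.0**53
 True
 >>> 2.0**53 + 2.0 == 2.0**53
 False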

The largest power of 10 that is exactly representable is 10**22, and the
reason is plain when you look at the bits:

10**22 takes more than 53 bits, but has only zeroes to the right of the top 53,
whereas 10**23 has a one bit out there:

 >>> prb(2.**53-1.0)
 '11111111111111111111111111111111111111111111111111111'
 >>> prb(10**22)
 '10000111100001100111100000110010011011101010110010010000000000000000000000'
 >>> prb(1e22)
 '10000111100001100111100000110010011011101010110010010000000000000000000000'

whereas for 23, 1e23 != 10**23:
 >>> prb(10**23)
 '10101001011010000001011000111111000010100101011110110100000000000000000000000'
 >>> prb(1e23)
 '10101001011010000001011000111111000010100101011110110000000000000000000000000'
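The same mismatch shows up in plain decimal:

 >>> long(1e22) == 10**22
 True
 >>> long(1e23) == 10**23
 False
 >>> long(1e23)
 99999999999999991611392L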

The trailing zeroes are eliminated if you use the corresponding power of
10/2, i.e. of 5, which is always odd:
 >>> prb( 5**22)
 '1000011110000110011110000011001001101110101011001001'
 >>> prb( 5**23)
 '101010010110100000010110001111110000101001010111101101'
 >>> prb(2.**53-1.0)
 '11111111111111111111111111111111111111111111111111111'

Or in decimal terms:

 >>> 5.**22
 2384185791015625.0
 >>> 5**22
 2384185791015625L
 >>> 5.**23
 11920928955078124.0
 >>> 5**23
 11920928955078125L
 >>> 2**53
 9007199254740992L

So what makes 5.**22 (and hence 1e22) ok is that

 >>> 5.**22 <= 2.**53-1
 True
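So you can locate the break point directly from that criterion:

 >>> max([k for k in range(64) if 5**k <= 2**53 - 1])
 22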

>I don't know the details on 754 FP, but the FP I'm used to represents
>*all* numbers as a binary fraction times an exponent. Since .1 can't
>be represented exactly, 1e<anything> will be wrong if you ask for
>enough digits.
I don't understand the "since .1 ..." logic, but I agree with the second
part. Re *all* numbers: if you multiply the fraction represented by the
53 fraction bits of any number by 2**53, you get an integer, which you can
then consider to be scaled by 2**(the exponent for the fraction - 53).
That changes nothing numerically, so I did it, in order to talk about
counting by increments of one unit of least precision. But yes, the
usual description is as a fraction times a power of two.
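(math.frexp spells out that decomposition directly, returning the fraction
and exponent; scaling the fraction by 2**53 recovers the integer significand,
e.g. for 0.1:)

 >>> import math
 >>> f, e = math.frexp(0.1)
 >>> long(f * 2**53), e - 53
 (7205759403792794L, -56)
 >>> 7205759403792794 * 2.0**-56 == 0.1
 True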

>
>This recently caused someone to propose that 1e70 should be a long
>instead of a float. No one mentioned the idea of making
>
>[0-9]+[eE]\+?[0-9]+ be of integer type, and
>
>[0-9]*\.[0-9]+[eE][+-]?[0-9]+ be a float. [0-9]+[eE]-[0-9]+ would also
>be a float. No simple rule for this, unfortunately.
>
I wrote a little exact decimal module based on keeping a decimal exponent and
a rational numerator/denominator pair, which allows keeping an exact representation
of any reasonable literal (any you might feel like typing ;-), like 1e70, etc. E.g.,

 >>> ED('1e70')
 ED('1.0e70')
 >>> ED('1e70').astuple()
 (1, 1, 70)
 >>> ED('1.23e-45')
 ED('1.23e-45')
 >>> ED('1.23e-45').astuple()
 (123, 1, -47)
 >>> ED('0.1')
 ED('0.1')
 >>> ED('0.1').astuple()
 (1, 1, -1)
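(ut.exactdec isn't published; for the curious, a stripped-down hypothetical
sketch of just the string-literal parsing and astuple() behavior shown above
might look like the following -- the real module also accepts floats, as in
the ED(0.1,'all') form further down, and keeps a genuine rational denominator:)

 import re

 class ED(object):
     """Exact decimal sketch: numerator, denominator, power-of-ten exponent."""
     _lit = re.compile(r'^([+-]?)(\d*)(?:\.(\d+))?(?:[eE]([+-]?\d+))?$')

     def __init__(self, s):
         m = self._lit.match(s)
         if not m or not (m.group(2) or m.group(3)):
             raise ValueError('bad decimal literal: %r' % (s,))
         sign, whole, frac, exp = m.groups()
         frac = frac or ''
         num = int((whole or '0') + frac)   # all digits as one integer
         e = int(exp or '0') - len(frac)    # decimal point moved past the fraction digits
         while num and num % 10 == 0:       # normalize trailing zeroes into the exponent
             num //= 10
             e += 1
         if sign == '-':
             num = -num
         self.num, self.den, self.exp = num, 1, e

     def astuple(self):
         return (self.num, self.den, self.exp)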


The reason I mention this is not because I think all floating point constants
should be represented this way in final code, but because maybe they should be
in the compiler AST, before code has been generated. At that point it seems a
shame to have already done a premature lossy conversion to platform floating
point, since one might want to take the AST and generate code with other
representations. I.e.,

 >>> import compiler
 >>> compiler.parse('a=0.1')
 Module(None, Stmt([Assign([AssName('a', 'OP_ASSIGN')], Const(0.10000000000000001))]))
 >>> compiler.parse('0.1')
 Module(None, Stmt([Discard(Const(0.10000000000000001))]))

but:
 >>> from ut.exactdec import ED
 >>> ED('0.1')
 ED('0.1')
 >>> ED('0.1').astuple()
 (1, 1, -1)

vs. what's represented by the actual floating point bits:

 >>> ED(0.1,'all')
 ED('0.1000000000000000055511151231257827021181583404541015625')
 >>> ED(0.1,'all').astuple()
 (1000000000000000055511151231257827021181583404541015625L, 1L, -55)
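That ties in with the frexp decomposition above: 0.1 is stored exactly as
7205759403792794/2**56 = 3602879701896397/2**55, and multiplying numerator
and denominator by 5**55 (making the denominator 10**55) reproduces those
digits:

 >>> 3602879701896397 * 5**55
 1000000000000000055511151231257827021181583404541015625L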

Anyway, a tuple is an easy exact possibility for an intermediate representation
of the number. Of course, I guess you'd have to tag it as coming from a floating
point literal, or else a code generator would lose that implicit representation
directive for ordinary code generation ...
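(E.g., a hypothetical tagged constant, names made up, that defers the lossy
conversion to code generation time:)

 class ExactConst(object):
     """AST constant sketch: exact decimal triple plus the literal's kind."""
     def __init__(self, num, den, exp10, kind):
         self.num, self.den, self.exp10 = num, den, exp10
         self.kind = kind                    # 'float' or 'int' literal

     def as_platform_float(self):
         # the lossy step, deferred until a platform float is really wanted;
         # going through the decimal string keeps the conversion correctly
         # rounded when den == 1
         return float('%de%d' % (self.num, self.exp10)) / self.den

 >>> ExactConst(1, 1, -1, 'float').as_platform_float() == 0.1
 True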

Regards,
Bengt Richter


