HELP: restore my faith in Python

Jason Stokes jstok at bluedog.apana.org.au
Fri Mar 3 22:36:23 EST 2000


Holton, Steven [NCRTP:6125:EXCH] wrote in message
<38C00D46.8CC5BA67 at americasm10.nt.com>...
>I'm trying to learn Python, and wrote this script last night.  I was not
>expecting to see this behavior, and my faith is severely shaken:
>
>nose-46> python
>Python 1.5.2 (#1, Sep 17 1999, 20:15:36)  [GCC egcs-2.91.66 19990314/Linux
>(egcs-1.1.2 release)] on linux-i386
>Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>>> limit = 5
>>>> for target in range(10):
>...    n=((target/(limit+0.0))-(target/limit))*limit
>...    print target, n, int(n)
>...
>0 0.0 0
>1 1.0 1
>2 2.0 2
>3 3.0 3
>4 4.0 4
>5 0.0 0
>6 1.0 0     <=== huh?
>7 2.0 1     <=== et tu Brutus?
>8 3.0 3
>9 4.0 4
>
>Column 1 is the index, column 2 is 'n' as floating, and column 3 is
>'n' as an integer.  Except that it ain't.


"int" uses a fairly dumb algorithm to translate a float value to an integer
value.  It simply truncates the decimal part.  int(0.999999) = 0.
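
You can see the truncation behaviour in isolation:

>>> int(0.999999)
0
>>> int(-0.999999)        # truncation is toward zero, not a floor
0
>>> int(round(0.999999))
1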

Your printed output of the float value, on the other hand, is rounded to a
fixed number of significant digits.  For example, the actual value of entry
6 is neither 1.0 nor 0, but slightly less than 1: decimal fractions such as
1/5 often have no exact finite binary representation, and the resulting
roundoff error propagates through the arithmetic:

>>> "%.20f" % n
'0.99999999999999978000'
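
This also explains why the print statement showed 1.0: in Python 1.5.2,
print uses str(), which rounds to 12 significant digits, while echoing an
expression at the prompt uses repr(), which shows 17 (the exact digits may
differ slightly across platforms):

>>> print n
1.0
>>> n
0.99999999999999978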

This is an inherent limitation of finite-precision floating-point
arithmetic, and it isn't specific to any particular programming language.
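
The classic demonstration, which comes out the same in C, Fortran or
anything else built on IEEE doubles:

>>> 0.1 + 0.2 == 0.3
0
>>> "%.20f" % (0.1 + 0.2)
'0.30000000000000004441'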

When using floating point in anger, therefore, you need to establish error
bounds and precision requirements for your algorithms.  Instead of calling
int() directly on floating-point values, round them to an appropriate
precision first.
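
For example, a minimal rework of the original loop (same Python 1.5.2
syntax as above), rounding n to the nearest integer before converting:

>>> limit = 5
>>> for target in range(10):
...     n = ((target/(limit+0.0)) - (target/limit)) * limit
...     # round to nearest before truncating; round() returns a float
...     print target, n, int(round(n))
...

This prints 0 through 4 in the last column for both halves of the range, as
intended.  round() returns a float, so the int() is still needed, but now
the value being truncated is exactly 1.0 rather than just under it.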
