[Slightly OT]: More on ints and floats

Chad Netzer cnetzer at mail.arc.nasa.gov
Mon Apr 7 18:55:43 EDT 2003


On Mon, 2003-04-07 at 15:03, Tim Daneliuk wrote:
> He argues that
> any place one can use an int, these days (with cheap FP hardware), one
> could use a float 0 extended to the precision of the machine and get
> equivalent computational results. Say the hardware supported 4 digits of
> precision. He is arguing that:
> 
> 
>     3/4.0000  is equivalent to 3.0000/4.0000

If you have a 32 bit int and a 32 bit float, the ints are NOT a proper
subset of the values the float can represent.  You need at least a 64
bit double to have the same granularity as a 32 bit int.  And if you use
64 bit ints, then even a "long double" may not have 64 bits of integer
granularity.  If you try to use a float as a file pointer, or a double
as a 64 bit long file pointer, you WILL run into a whole lot of problems
on large data sets.  See below:

$ python
Python 2.2.2 (#1, Mar 21 2003, 23:01:54)
>>> import sys
>>> import Numeric as Num
>>>
>>> a = Num.array( sys.maxint, Num.Float32 )
>>> b = Num.array( 1, Num.Float32 )
>>>
>>> sys.maxint
2147483647
>>> a
2147483648.0
>>> a == sys.maxint
0
>>> b
1.0
>>> a-b == sys.maxint
0
>>> a-b
2147483648.0
>>> a-b-b
2147483648.0

So, subtracting 1 from a large valued float does not change its value
(and in fact subtracting 64 from this same float will not change it).
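The same effect can be reproduced today without Numeric, using only the
standard struct module to round values to single precision (a sketch,
not the original session; the to_float32 helper is my own):

```python
import struct

def to_float32(x):
    # Round a Python float (a C double) to the nearest IEEE 754
    # single-precision value by packing and unpacking it as a 32 bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

big = to_float32(2**31)          # 2147483648.0, just past a 32 bit maxint

# Near 2**31, adjacent float32 values are 256 apart, so small
# subtractions round straight back to the same value:
assert to_float32(big - 1) == big     # subtracting 1 changes nothing
assert to_float32(big - 64) == big    # neither does subtracting 64
assert to_float32(big - 129) != big   # past half a ulp, it finally moves
```

The gap between representable float32 values near 2**31 is 2**(31-23) = 256,
which is why anything less than half that gap simply vanishes.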


That is because you typically need a 64 bit double to have all the
granularity of a 32 bit int.  And although modern CPUs have made much
progress on fast float/double operations, ints are still typically
faster.  Often there are more integer execution units than float units
in the chip itself (ie. more integer operations can be retired than
floats or doubles, in the same amount of time).  With "vector" floating
point processors, the difference in speed is not as cut and dried, but
the precision problem still exists.
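The 64 bit case is easy to see in plain Python, since a Python float is
a C double with a 53-bit significand (a sketch; the "file offset" here
is just an illustrative value):

```python
# A 64 bit integer stored in a double silently loses its low bits once
# it exceeds 2**53, e.g. when misused as a large file offset.
offset = (1 << 53) + 1                     # a plausible 64 bit byte offset

assert float(offset) == float(offset - 1)  # the low bit is rounded away
assert int(float(offset)) != offset        # round-tripping corrupts it

# Integers up to 2**53 survive the round trip exactly:
assert int(float(1 << 53)) == (1 << 53)
```

So a double makes a fine 32 bit "int", but not a 64 bit one.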


So, I'd say your friend is basically wrong.






More information about the Python-list mailing list