numpy numbers converted wrong

Travis Oliphant oliphant.travis at ieee.org
Thu Oct 26 19:25:42 EDT 2006


robert wrote:
> In Gnuplot (Gnuplot.utils) the input array is converted to a Numeric float array as shown below. When I insert a numpy array into Gnuplot this way, numbers like 7.44 are cast to 7.0.
> Why is this, and what should I do? Is this a bug in numpy or in Numeric?
> 
> 
> [Dbg]>>> m         #numpy array
> array([[  9.78109200e+08,   7.44000000e+00],
>        [  9.78454800e+08,   7.44000000e+00],
>        [  9.78541200e+08,   8.19000000e+00],
>        ..., 
>        [  1.16162280e+09,   8.14600000e+01],
>        [  1.16170920e+09,   8.10500000e+01],
>        [  1.16179560e+09,   8.16800000e+01]])
>
> [Dbg]>>> Numeric.asarray(m, Numeric.Float32)[:10]
> array([[ 9.78109184e+008,  7.00000000e+000],
>        [ 9.78454784e+008,  7.00000000e+000],
>        [ 9.78541184e+008,  8.00000000e+000],
>        [ 9.78627584e+008,  8.00000000e+000],
>        [ 9.78713984e+008,  8.00000000e+000],
>        [ 9.78973184e+008,  8.00000000e+000],
>        [ 9.79059584e+008,  8.00000000e+000],
>        [ 9.79145984e+008,  8.00000000e+000],
>        [ 9.79232384e+008,  9.00000000e+000],
>        [ 9.79318784e+008,  8.00000000e+000]],'f')
> [Dbg]>>> Numeric.asarray(m, Numeric.Float)[:10]
> array([[ 9.78109200e+008,  7.00000000e+000],
>        [ 9.78454800e+008,  7.00000000e+000],
>        [ 9.78541200e+008,  8.00000000e+000],
>        [ 9.78627600e+008,  8.00000000e+000],
>        [ 9.78714000e+008,  8.00000000e+000],
>        [ 9.78973200e+008,  8.00000000e+000],
>        [ 9.79059600e+008,  8.00000000e+000],
>        [ 9.79146000e+008,  8.00000000e+000],
>        [ 9.79232400e+008,  9.00000000e+000],
>        [ 9.79318800e+008,  8.00000000e+000]])

This is odd, but we need to know the version numbers of both packages to 
help further.   For one thing, I'm surprised that you can use 
Numeric.asarray to force-cast to Numeric.Float32 without raising an error.

Also, you can ask on numpy-discussion at lists.sourceforge.net to reach an 
audience more directly able to help.
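One quick way to narrow the problem down is to perform the same down-cast purely within numpy (a minimal sketch, assuming a reasonably recent numpy); if the fractional part survives there, the truncation is happening on the Numeric side of the conversion:

```python
import numpy as np

# Same shape of data as in the report: a timestamp column and a value column
m = np.array([[9.781092e8, 7.44],
              [9.784548e8, 7.44]])

# Down-cast to 32-bit float entirely within numpy
m32 = m.astype(np.float32)

# Only precision is lost, not the fractional part
print(m32[0, 1])
```

A common workaround for cross-package conversion bugs of this kind is to go through a plain Python list, e.g. `Numeric.array(m.tolist(), Numeric.Float)`, which sidesteps the buffer-level conversion entirely (untested here against that version of Numeric).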

> [Dbg]>>> 
> 
> 
> and why and what is:
> 
> [Dbg]>>> m[0,1]
> 7.44
> [Dbg]>>> type(_)
> <type 'numpy.float64'>
> [Dbg]>>> 
> 
> 
> does this also slow down python math computations? 

No, not necessarily (depends on what you mean).

Python floats are still Python floats.  NumPy provides, in addition, an 
array scalar for every "kind" of data that a NumPy array can be composed 
of.  This avoids the problems with being unable to find an appropriate 
Python scalar for a given data-type.  Where possible, the NumPy scalar 
inherits from the Python one.
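For instance, the numpy.float64 seen in the session above subclasses Python's float, so it can be handed to anything that expects an ordinary float (a minimal illustration):

```python
import numpy as np

x = np.float64(7.44)

# The float64 array scalar inherits from Python's float where possible,
# so it passes isinstance checks and ordinary float operations
print(isinstance(x, float))   # True
print(float(x))               # plain Python float with the same value
```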

By default, the NumPy scalars have their own math defined which uses the 
error-mode setting capabilities of NumPy to handle errors.  Right now, 
these operations are a bit slower than Python's built-ins because of the 
way that "mixed" calculations are handled.
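The difference in error handling can be seen directly (a sketch using numpy's seterr machinery; the exact default modes may vary between versions):

```python
import numpy as np

# Python's float raises an exception on division by zero ...
try:
    1.0 / 0.0
except ZeroDivisionError:
    print("Python float: ZeroDivisionError")

# ... while numpy scalars consult the error-mode settings instead
old = np.seterr(divide='ignore')        # silence the warning for this demo
print(np.float64(1.0) / np.float64(0.0))  # inf, no exception
np.seterr(**old)                        # restore the previous settings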

For the data-types that overlap with Python scalars, you can set things 
up so that NumPy scalars use the Python math instead if you want.  But, 
again, NumPy does nothing to change the way that Python numbers are 
calculated.


> should one better stay away from numpy in current stage of numpy development?

No, definitely not. Don't stay away.  NumPy 1.0 is out.



