[Numpy-discussion] 16bit Integer Array/Scalar Inconsistency
Robert Kern
robert.kern at gmail.com
Thu Aug 2 15:31:07 EDT 2007
Ryan May wrote:
> Hi,
>
> I ran into this while debugging a script today:
>
> In [1]: import numpy as N
>
> In [2]: N.__version__
> Out[2]: '1.0.3'
>
> In [3]: d = N.array([32767], dtype=N.int16)
>
> In [4]: d + 32767
> Out[4]: array([-2], dtype=int16)
>
> In [5]: d[0] + 32767
> Out[5]: 65534
>
> In [6]: type(d[0] + 32767)
> Out[6]: <type 'numpy.int64'>
>
> In [7]: type(d[0])
> Out[7]: <type 'numpy.int16'>
>
> It seems that numpy will automatically promote the scalar to avoid
> overflow, but not in the array case. Is this inconsistency a bug, or just
> a (known) gotcha?
Known feature. When arrays and scalars are mixed and their types are within the
same kind (e.g. both are integer types, just at different precisions), the type
of the scalar is ignored. This solves one of the usability issues with trying to
use lower precisions: you still want to be able to divide by 2.0, for example,
without automatically up-casting your very large float32 array.
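A short sketch of the float32 case mentioned above (names are illustrative;
the dtype behavior shown is the point):

```python
import numpy as np

# A float32 array divided by a Python float: the scalar's type is
# within the same kind (floating point), so it does not up-cast the
# array, and the result stays float32.
a = np.ones(4, dtype=np.float32)
result = a / 2.0
print(result.dtype)  # float32
```

The same-kind rule is what keeps memory-conscious code using float32 (or
int16) workable without sprinkling explicit casts around every scalar.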
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco