[Numpy-discussion] numpy.nansum() behavior in 1.3.0

josef.pktd at gmail.com josef.pktd at gmail.com
Mon Jun 1 19:50:53 EDT 2009


On Mon, Jun 1, 2009 at 7:43 PM,  <josef.pktd at gmail.com> wrote:
> On Mon, Jun 1, 2009 at 7:30 PM, Robert Kern <robert.kern at gmail.com> wrote:
>> On Mon, Jun 1, 2009 at 15:31,  <josef.pktd at gmail.com> wrote:
>>> On Mon, Jun 1, 2009 at 4:06 PM, Alan G Isaac <aisaac at american.edu> wrote:
>>>> On 6/1/2009 3:38 PM josef.pktd at gmail.com apparently wrote:
>>>>> Here's a good one:
>>>>>
>>>>>>>> np.isnan([]).all()
>>>>> True
>>>>>>>> np.isnan([]).any()
>>>>> False
>>>>
>>>>
>>>>  >>> all([])
>>>> True
>>>>  >>> any([])
>>>> False
>>>
>>> also:
>>>
>>>>>> y
>>> array([], dtype=float64)
>>>>>> (y>0).all()
>>> True
>>>>>> (y>0).any()
>>> False
>>>>>> ((y>0)>0).sum()
>>> 0
>>>
>>> I don't know what the logic is, but it causes the bug in np.nansum.
>>
>> You will have to special-case empty arrays, then.
>>
>
> Is np.size the right check for a non-empty array, including subtypes?
>
> i.e.
>
> if y.size and mask.all():
>        return np.nan
>
> or, more explicitly:
> if y.size > 0 and mask.all():
>        return np.nan
>
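
In other words, the check would sit roughly like this in a nansum-style
helper (a sketch only, not the actual numpy source; the names are
illustrative):

import numpy as np

def nansum_sketch(a, axis=None):
    # illustrative helper, not the numpy.nansum implementation
    y = np.array(a, subok=True)
    mask = np.isnan(y)
    # mask.all() is vacuously True for an empty array, so check
    # y.size first before deciding the result is nan
    if y.size > 0 and mask.all():
        return np.nan
    y[mask] = 0
    return y.sum(axis)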

Actually, now I think this is the wrong behavior; nansum should never
return nan.

>>> np.nansum([np.nan, np.nan])
1.#QNAN

Shouldn't this be zero?
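
Something along these lines (a sketch, the name is mine) gives the
zero-returning behavior by treating every nan as a zero contribution,
so an all-nan or empty input sums to 0 instead of nan:

import numpy as np

def nansum_as_zero(a, axis=None):
    # sketch of the suggested behavior: nans contribute 0 to the sum
    y = np.asarray(a, dtype=float)
    return np.where(np.isnan(y), 0.0, y).sum(axis)

>>> nansum_as_zero([np.nan, np.nan])
0.0
>>> nansum_as_zero([])
0.0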

Josef


