[SciPy-User] Return type of scipy.interpolate.splev for input array of length 1

josef.pktd at gmail.com
Wed Jan 20 16:18:39 EST 2010


On Wed, Jan 20, 2010 at 3:00 PM, Anne Archibald
<peridot.faceted at gmail.com> wrote:
> 2010/1/19 Pauli Virtanen <pav+sp at iki.fi>:
>> Mon, 18 Jan 2010 10:59:46 -0500, josef.pktd wrote:
>>> On Sun, Jan 17, 2010 at 5:25 AM, Yves Frederix <yves.frederix at gmail.com>
>>> wrote:
>> [clip]
>>>> It was rather unexpected that the type of input and output data are
>>>> different. After checking interpolate/fitpack.py it seems that this
>>>> behavior results from the fact that the length-1 case is explicitly
>>>> treated differently (probably to be able to deal with the case of
>>>> scalar input, for which scalar output is expected):
>>>>
>>>>  434 def splev(x,tck,der=0):
>>>>  <snip>
>>>>  487         if ier: raise TypeError,"An error occurred"
>>>>  488         if len(y)>1: return y
>>>>  489         return y[0]
>>>>  490
>>>>
>>>> Wouldn't it be less confusing to have the return value always have the
>>>> same type as the input data?
>>>
>>> I don't know of any "official" policy.
>>
>> I think (unstructured) interpolation should respect
>>
>>        input.shape == output.shape
>>
>> also for 0-d. So yes, it's a wart, IMHO.
>>
>> Another question is: how many people actually have code that depends on
>> this wart, and can it be fixed? I'd guess there's not much problem: (1,)
>> arrays function nicely as scalars, but not vice versa because of
>> mutability.
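
To illustrate the asymmetry Pauli describes (example mine, not from the
thread):

    import numpy as np

    a = np.array([2.0])   # shape (1,)
    a + 1                 # array([ 3.]), usable in scalar-like arithmetic
    a[...] = 5.0          # and mutable in place, like any array

    s = np.float64(2.0)   # a true scalar
    s + 1                 # 3.0
    # s[...] = 5.0        # TypeError: scalars are immutable, so code
    #                     # that writes into the result breaks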
>
> More generally, I think many functions should preserve the shape of
> the input array. Unfortunately it's often a hassle to do this: a few
> functions I have written start by checking whether the input is a
> scalar, setting a boolean and converting it to an array of size one;
> then at the end, I check the boolean and strip the array wrapping if
> the input is a scalar. It's annoying boilerplate, and I suspect that
> many functions don't handle this just because it's a nuisance. Some
> handy utility code might help.
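
That boilerplate factors out nicely. A minimal sketch as a reusable
decorator (scalar_in_scalar_out is a made-up name, not an existing
numpy or scipy utility):

    import functools
    import numpy as np

    def scalar_in_scalar_out(func):
        # the pattern described above: promote scalar input to a
        # 1-element array, strip the wrapping again on the way out
        @functools.wraps(func)
        def wrapper(x, *args, **kwargs):
            scalar_input = np.ndim(x) == 0
            y = func(np.atleast_1d(x), *args, **kwargs)
            return y[0] if scalar_input else y
        return wrapper

    @scalar_in_scalar_out
    def double(x):
        return 2 * x

    double(3)              # -> 6, a scalar
    double(np.array([3]))  # -> array([6]), shape preserved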
>
> It would also be good to have a generic test one could apply to many
> functions to check that they preserve array shapes (0-d, 1-d of size
> 1, many-dimensional, many-dimensional with a zero dimension),  and
> scalarness. Together with a test for preservation of arbitrary array
> subclasses (and correct functioning when handed matrices), one might
> be able to shake out a lot of minor easy-to-fix nuisances.
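
For the simplest case, a function taking a single array argument, such
a test is easy to sketch (check_shapes is a made-up name; this ignores
the multi-signature problem that comes up below):

    import numpy as np

    def check_shapes(func):
        # input.shape == output.shape for the cases listed above
        cases = [np.array(1.0),       # 0-d
                 np.ones(1),          # 1-d of size 1
                 np.ones((2, 3)),     # many-dimensional
                 np.ones((2, 0))]     # with a zero dimension
        for x in cases:
            assert np.shape(func(x)) == x.shape, x.shape
        # scalar in -> scalar out
        assert np.ndim(func(1.0)) == 0

    check_shapes(np.exp)    # elementwise ufuncs pass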
>
> Anne

I just checked again; the conversion in the distributions code is weaker:

        if output.ndim == 0:
            return output[()]

as a result:

>>> stats.norm.pdf(np.array([1]))
array([ 0.24197072])
>>> stats.norm.pdf(np.array(1))
0.24197072451914337

I just followed Travis's pattern here.
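
For contrast, a side-by-side sketch of the two collapsing conventions
(function names are mine, not scipy's):

    import numpy as np

    def collapse_splev_style(y):
        # fitpack.splev: length-1 results are collapsed too
        if len(y) > 1:
            return y
        return y[0]

    def collapse_distribution_style(output):
        # stats distributions: only true 0-d results become scalars
        if output.ndim == 0:
            return output[()]
        return output

    collapse_splev_style(np.array([0.5]))         # 0.5, shape lost
    collapse_distribution_style(np.array([0.5]))  # array([ 0.5]), shape kept
    collapse_distribution_style(np.array(0.5))    # 0.5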

Handling and preserving array subclasses is a lot of work; it
increases the size of simple functions considerably and triples (? not
checked) the number of required tests (I just tried it with
stats.gmean, hmean and zscore). I don't see a way to write generic
tests that would work across different signatures and argument types.
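
On the subclass point, most of that work starts with the
asarray/asanyarray choice. A rough sketch (not the actual
stats.gmean):

    import numpy as np

    def gmean_sketch(a, axis=0):
        # np.asanyarray keeps subclasses such as np.matrix alive,
        # where np.asarray would silently return a plain ndarray
        a = np.asanyarray(a)
        return np.exp(np.log(a).mean(axis=axis))

    m = np.matrix([[1.0, 4.0]])
    type(gmean_sketch(m, axis=1))   # numpy.matrix, subclass preserved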

Josef


