[PEP draft 2] Adding new math operators

Huaiyu Zhu hzhu at localhost.localdomain
Fri Aug 11 13:22:01 EDT 2000


On 11 Aug 2000 11:26:01 +0200, Konrad Hinsen <hinsen at cnrs-orleans.fr> wrote:
>
>> In statistical computations it is common to switch context of doing
>> elementwise computation (ie assuming independence between the components)
>> and do full matrix operation (taking into account correlation, etc). 
>
>Sounds familiar as well - and my solution is the same. The statistical
>operation is encapsulated in a function or method (as one would do
>anyway!), which contains all the matrix operations.

I'm not sure I follow you here.  Let's say A is a matrix.  The expression
(using matrixwise notation)

    B = A ~- mean(A)

would be a very common construct that occurs in many formulas that are
predominantly of a linear-algebra type.  In Matlab it is done as

    B = A - ones(size(A,1),1)*mean(A)

which is not efficient.  Now we could use

    def deviation(A): return (A.array-mean(A).array).matrix
    B = deviation(A)
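As a sketch of what deviation computes, here is the same operation in a
modern NumPy-style form, where broadcasting replaces the explicit
ones(...)*mean(A) replication of the Matlab version.  (This is purely
illustrative; the names and array interface here are not the matrix/array
interface under discussion.)

```python
import numpy as np

def deviation(A):
    # Subtract each column's mean from every row; broadcasting does
    # the replication that Matlab's ones(size(A,1),1)*mean(A) spells out.
    return A - A.mean(axis=0)

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])
B = deviation(A)
# Each column of B now sums to zero.
```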

but to do it at this level, we'll need hundreds or thousands of special
functions of this sort.  What about 

    B = scaling ~* (A ~- mean(A))

where scaling is also a row vector?  Do we define scaleddeviation(A)?  If we
really are going to use function calls in these situations, the only way to
avoid proliferation of special functions would be like

    def esub(a,b): return (a.array-b.array).matrix
    def emul(a,b): return (a.array*b.array).matrix
    B = emul(scaling, esub(A, mean(A)))
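A sketch of what these helpers do, written for plain NumPy-style arrays
rather than the matrix objects with .array/.matrix views assumed above
(so the helpers reduce to the ordinary elementwise operators; names are
illustrative only):

```python
import numpy as np

def esub(a, b):
    # elementwise subtraction
    return a - b

def emul(a, b):
    # elementwise (Hadamard) product
    return a * b

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])
scaling = np.array([0.5, 2.0])   # row vector of per-column scales

# The same computation as  scaling ~* (A ~- mean(A))  in the
# proposed notation:
B = emul(scaling, esub(A, A.mean(axis=0)))
```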

But then we are back to square one.  (I think I don't need to show why these
formulas could occur in predominantly linear algebra settings.)

These operations are also interchangeable as algorithms evolve.  Suppose A
is a collection of time series; it is then common to replace the scaling
with a filtering, and we have

   B = filtering * (A ~- mean(A))

where filtering is now a matrix and the * is matrixwise multiplication.
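A sketch of this variant, again in illustrative NumPy-style code: the
elementwise deviation step is unchanged, but the scaling step becomes a
true matrix product (the data layout here is made up for the example).

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])          # columns as time series (illustrative)
deviations = A - A.mean(axis=0)     # the elementwise  A ~- mean(A)  step

# filtering is now a full matrix, and the * in the formula above is
# matrixwise multiplication, i.e. an ordinary matrix product:
filtering = np.array([[1.0, 0.5],
                      [0.0, 1.0]])
B = filtering @ deviations
```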

>In both of these cases, the "different-logic" parts of the code are
>complex enough that they would be seen as separate algorithmic steps
>and written as functions or methods. This is very different from the
>artificial examples you used, in which switches between the "array" and
>"matrix" point of view occur three times in a single line.

I'm not sure what you mean by "complex enough".  In my examples they are as
basic as the four elementwise arithmetic operations.  The composition of
these operations could be considered complex enough, but there is an
unlimited supply of such compositions.  The examples I gave in previous
posts only look artificial because I stripped the application-related
details.  I could have expanded each line into an example similar to the
above (which is still simplified), and they would not look artificial.

If you consider each operation as switching the point of view on the
objects, it indeed looks excessive.  From another point of view, however,
you could consider it as doing one type of operation once or twice amid
many operations of a dominant type, and then it comes up naturally.

Of course, it could be that your examples have very different
characteristics.  In that case it would help if we could see some of them
here.

I also think there is a selection bias here: the early adopters of the NumPy
interface are more likely to be satisfied with array-only operations,
precisely because their applications have such characteristics.  Those who
do predominantly matrixwise operations are largely discouraged by the
existing interface.  (This is definitely my own opinion only, and it does
not count as much as any concrete example.)

Huaiyu
